Disclaimer: I’m fully aware this is an insane idea. That said…
Some time ago I started a project called coderun. It was meant to make running code in Docker containers and Lambda stupid easy and secure. I kinda left it in a half-finished state because I wasn’t sure where I wanted to take the project.
It was a bit over-engineered, but the idea was you could simply start the shell, execute any script, and it would pull down the right container and run the code. It even had fancy plugins that acted like Little Snitch, as well as FuseFS mounts for intercepting calls to sensitive files, allowing you to accept or block access. Although these were quite interesting, I think the more interesting part of the project is treating code, Docker, and Lambda the same, wrapped in a shell you know all too well.
i.e. if you run ./test.py in this shell, it will deploy and run it in Docker the same as it would in Lambda (if it were running in Lambda mode).
Of course, Docker and Lambda are not the same thing, though… maybe I should have started with an ECS mode here instead. But anyway, I’ll think about that another time. What I want to try to figure out is how we can seamlessly bridge local and remote resources in a way that is as simple as possible. The rest of this post is mostly just going to be a collection of notes more than anything.
> ./first.sh && ./sometimes-second.sh; always-third.sh
Self-explanatory: first.sh always executes, sometimes-second.sh executes only if the first succeeds (that’s the &&), and always-third.sh executes every time at the end.
> ./first.sh
We can do the same in Docker fairly simply by making a shell that tries to dynamically determine what the Docker image should be. This is what I attempted to do with coderun.
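A minimal sketch of that idea, picking the image from the file extension (the function name and image mapping here are hypothetical, not coderun’s actual logic):

run_in_docker() {
  script=$1; shift
  case "$script" in
    *.py) img=python:3; interp=python ;;
    *.sh) img=alpine:3; interp=sh ;;
    *)    echo "no image mapping for $script" >&2; return 1 ;;
  esac
  # mount the script read-only and pass any remaining args to it
  docker run --rm -i -v "$PWD/$script:/tmp/script:ro" "$img" "$interp" /tmp/script "$@"
}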
> ./first.sh && ./sometimes-second.sh; always-third.sh
A way simpler approach to what I tried above is jessfraz’s setup. It also just runs in the normal shell, so nothing special going on here for &&, ;, etc…
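Her setup is basically a dotfile full of per-tool functions wrapping docker run; from memory, something like:

htop() {
  docker run --rm -it --pid host jess/htop
}

Each command is just a shell function, so &&, ;, and pipes compose them for free.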
> ./first.sh
coderun also attempted to do this, similarly to the Docker example.
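With plain AWS tooling, the moral equivalent is a deploy-then-invoke pair; roughly this, where the function name and zip file are placeholders:

> aws lambda update-function-code --function-name first --zip-file fileb://first.zip
> aws lambda invoke --function-name first /dev/stdout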
> ./first.sh && ./sometimes-second.sh; always-third.sh
Using event machines we can express a fair bit of shell-script semantics. I’ve messed around with this a bit but lost my notes on it; either way, I’ll likely be coming back to this. We should be able to do stdin/out/err piping here as well.
I think this is the big question that I need to think about more.
{
  Eventtopic: [start, stop, fail],
  Stdin: [queue|stdin|stdout|stderr],
  Stdout: queue, …
  Stderr: [queue|stdin|stdout|stderr],
}
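To make that concrete, the chain from earlier could decompose into one config per command, wired together by event topics. This is purely speculative (the Subscribe field is made up), just extending the notation above:

./first.sh: { Eventtopic: first, Stdout: queue, Stderr: queue }
./sometimes-second.sh: { Subscribe: first.stop, Stdin: queue }
always-third.sh: { Subscribe: [first.fail, sometimes-second.stop, sometimes-second.fail] }

The start/stop/fail events replace the control flow that && and ; normally give us, and the queues stand in for the pipes.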
./echo.msh file:
#!/Users/ryan/Code/msh/out/msh dockerfile
FROM alpine:3
ENTRYPOINT ["echo"]
Running it:
> ./echo.msh Hello && echo world
Hello
world
Dockerfiles get built with anonymous tags and executed.
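In plain Docker terms that’s roughly the following, using docker build -q to get an image ID back (a sketch of the plumbing, not msh’s actual code):

img=$(tail -n +2 ./echo.msh | docker build -q -)  # strip the shebang line, build the rest as a Dockerfile
docker run --rm -i "$img" Hello                   # untagged image; args go straight to the entrypoint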
The internal msh config when running this command is:
{
  Stdin: os.stdin,
  Stdout: os.stdout,
  Stderr: os.stderr,
}
./cat.msh file:
#!/Users/ryan/Code/msh/out/msh dockerfile
FROM alpine:3
ENTRYPOINT ["cat"]
Running it:
> ./echo.msh "Hello world | ./cat.msh
Hello world
The internal msh config for ./echo.msh is:
{
  Stdin: os.stdin,
  Stdout: os.stdout,
  Stderr: os.stderr,
}
It is passed through the pipe to ./cat.msh as the first line (newlines are removed from the JSON above). The msh in ./cat.msh receives this and knows where it should look for stdout and stderr from the previous command. If os.stdout is being used as output, the JSON is stripped before the data from the pipe is passed to ./cat.msh.
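In shell terms, the receiving side of the pipe is doing something like this (a sketch of the protocol, not msh’s implementation):

read -r header  # first line of the pipe: the upstream msh config as JSON
# ...parse $header to learn where the previous command put stdout/stderr...
cat             # when os.stdout is in use, everything after the header is the real data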