Publish Static Websites, Docker Containers or Node.js Apps Just by Typing: now

A few days ago, in the latest episode of the Reclaim Today video show (any chance of an audio podcast feed too?) Jim Groom and Tim Owens chatted about Zeit Now [docs], a serverless hosting provider.

[Update, 2/4/20: fly.io offers something similar; exoframe offers a self-hosted variation along the same lines.]

I’d half had an opportunity to get in on the call when the subject matter was mooted, but lackadaisicality on my part, plus the huge ‘sort my life out’ style “what did I / didn’t I say?” negative comedown I get after any audio/video recording, means I missed the chance and I’ll have to contribute this way…

First up, Tim already knows more about Zeit Now than I do.

Once you install the Zeit Now client on your desktop, you can just drag a folder onto the running app icon and it will be uploaded to the now service. If there’s a Dockerfile in the folder, a corresponding container will be built and an endpoint exposed. If there’s no Dockerfile, but there is a package.json, a node.js application will be created for you. And if there’s no Dockerfile and no package.json file, but there is an index.html page, you’ll find you’ve just published a static website.

As well as being able to drag the folder containing the project onto the now icon, you can cd into the folder on the command line and just type: now. The files will be pushed and the container / node.js app / website created for you.
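So the whole deployment, in the simplest case, looks something like this (the folder name is made up):

# deploy whatever is in the current folder; now infers the project type
cd my-static-site
now

The now client inspects the folder contents (Dockerfile? package.json? index.html?), builds and deploys accordingly, and prints the URL of the new deployment.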

If you prefer, you can put the files into a Github repo and connect the repo to the Zeit Now service; whenever you make a commit to the repo, a webhook will trigger a rebuild of the service running on now from the repo.

This much (and more) I learned as a direct consequence of Reclaim Today and a quick read around, and it’s way more powerful than I thought. Building node apps is a real bugbear for me – node.js always seems to want to download the internet, and I can never figure out how to start an app (the docs for any particular app often assume you know what to type to get started, which I never do). Now all I need to do is type: now.

But… and there are buts: in the free plan, the resource limits are quite significant. There’s a limit on the size of files you can upload, and in the Docker case there seems to be a limit on the image size or image build size (I couldn’t see that in the pricing docs, although the Logs limit looked to be the same as the limiting size I could have on a container?).

It looks like you can run as many services as you want (the number of allowed deployments is infinite, where I think a deployment equates to a service: a static web app, node.js app, or Docker container), although you can hit bandwidth or storage caps. Another thing to note is that in the free plan, the application source files are public.

If anyone would like to buy me a coffee towards a Zeit Now subscription, that would be very nice:-). The Premium plan comes in at about a coffee a week…

Prior to watching the above show, my engagement with Zeit Now had been through using datasette. In the show, Tim tried to install datasette by uploading the datasette folder, but that’s not how it’s designed to be used. Instead, datasette is a Python package that runs as a command-line app. The app can do several things, each of which maps onto a one-line command (sketched after the list):

  • launch a local http server providing an interactive HTML UI to a SQLite database;
  • build a local Docker container capable of running a local http server providing an interactive HTML UI to a SQLite database;
  • create a running online service providing an interactive HTML UI to a SQLite database on Zeit Now or Heroku.
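For example, given a database file (the name is made up; datasette package needs a local Docker install, which datasette publish does not):

# run a local server against a SQLite file
datasette mydb.sqlite

# build a local Docker image wrapping the database and server
datasette package mydb.sqlite

# push the whole thing to a hosting service
datasette publish now mydb.sqlite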

So the way I had used datasette with Zeit Now was as follows – install the datasette Python package (once only):

pip install datasette

And then, with the Zeit Now app installed and running on my local computer:

datasette publish now mydb.sqlite

This is then responsible for pushing the necessary files to Zeit Now and displaying the URL to the running service. Note that whilst the application runs inside a docker container on Zeit Now, I don’t need Docker installed on my own computer.
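There are also options for customising the deployment – if I recall the docs correctly, for example, a metadata file describing the database can be passed along (the filename here is made up):

datasette publish now mydb.sqlite --metadata metadata.json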

It struck me last night that this is a really powerful pattern, one which complements a workflow pattern I use elsewhere. To wit: many Python and R packages exist that create HTML pages from templates for displaying interactive charts or maps. The folium package, for example, is a Python package that plays nicely with Jupyter notebooks and can create an embedded map in a notebook. What’s embedded is an HTML page that is generated from a template. The folium package can also add data to the HTML page from a Python programme, so I can load location markers or shapefile datasets into Python and then push them into the map without needing to know anything about how to write the HTML or Javascript needed to render the map. In the R world, things like R‘s leaflet package do something similar.

This pattern – of Python, or R, packages that can create HTML assets – is really useful: it means you can easily create HTML assets that can be used elsewhere from a simple line or two of Python (or R) code.

This is where I think the datasette publish pattern comes in: now we have a pattern whereby a package generates not only an HTML application (in fact, an HTML site) but also provides the means to run it, either locally, packaged as a container, or via an online service (Zeit Now or Heroku). It should be easy enough to pull out the publish aspects of the code and use the same approach to create a Python package that can run a Scripted Form locally, or create a container capable of running it, or push a runnable Jupyter notebook scripted API to Zeit Now.
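Purely by way of illustration, the sort of command-line pattern I have in mind (the nbform tool name and its arguments are invented, not a real package):

# imagined: run a notebook-backed scripted form locally…
nbform serve mynotebook.ipynb

# …or package it as a container, or push it straight to Zeit Now,
# datasette publish style
nbform package mynotebook.ipynb
nbform publish now mynotebook.ipynb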

On a related point, the Reclaim Today video show mentioned R/Shiny. As Tim pointed out, R is the programming language, but Shiny is more of an HTML application development framework, written in R. It provides high-level R functions for creating interactive HTML UI interfaces and components, and binding them to R variables. When one of the running HTML form elements is updated, the value is passed to a corresponding R variable; other R functions can respond to the updated value and generate new outputs (charts, filtered datatables etc.), which are then passed back to Shiny for display in the HTML application. As with things like folium, the high-level R functions in this case are responsible for generating much of the HTML / Javascript automatically.

Thinks: a demo of the datasette publish model for a Shiny app could be quite handy?

A couple more things that occurred to me after watching the video…

Firstly, the build size limits that Zeit Now seems to enforce. Looking at the Dockerfile in the datasette repo, I notice it uses a staged / multi-stage build. That is, the first part of the container build compiles some packages that are then copied into a ‘restarted’ build. Building / compiling some libraries can require a lot of heavy lifting, with dependencies required for the build not being required in the final distribution. The multi-stage build can thus be used to create relatively lightweight images that contain custom-built packages, without having to tidy up any of the heavier packages that were installed simply to support the build process.

If a container intended for Zeit Now breached resource limits because of the build, that could block the build, even if the final container is quite light (I’m not sure if it does work like this, just let me think this through as if it does…). One alternative would be to reduce the multi-stage build to a single-stage Dockerfile, replacing the second-stage FROM with a set of housekeeping routines to clear out the build dependencies (this is presumably what multi-stage builds replace?), but this may still hit resource limits in the build stage. A second approach would be to split the multi-stage build into a first-stage build that creates, and tidies up, a base container that can be imported directly into a standalone second-stage Dockerfile. This is fine if you can create and push your own base container for the second-stage Dockerfile to pull on. But if you don’t have Docker installed, that could be difficult.

However, Docker Hub has a facility for building containers from Github repos (Docker Hub automated builds) in much the same way Zeit Now does. So I’m wondering – is there a now-like facility for pushing a build directory to Docker Hub and letting Docker Hub build it without resource limitation in the build step, and then letting Zeit Now pull on the finally built, cleaned-up image? Or is the fallback to build the base (first-stage) container on Docker Hub from a Github repo? (Again, there is the downside that the build files will be public.)
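For reference, the multi-stage pattern looks something like this minimal sketch (the base image and package names are made up for illustration):

# Stage 1: a throwaway build environment with the heavy toolchain
FROM python:3.6-slim AS builder
RUN apt-get update && apt-get install -y build-essential
# install into a known prefix so the built artefacts are easy to copy out
RUN pip install --prefix=/install some-heavy-package

# Stage 2: start afresh and copy in only the built artefacts; the compiler
# and other build dependencies never make it into the final image
FROM python:3.6-slim
COPY --from=builder /install /usr/local

The second approach described above would amount to publishing the first stage as its own image (say, myuser/mybase) and starting the standalone second-stage Dockerfile with FROM myuser/mybase.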

The second point to mention is one that relates to next-generation hosting (consider Reclaim’s current hosting offering, where users can run prepackaged applications from CPanel, as first generation; as far as publishing running containers goes, the current generation model might be thought of as using something like Digital Ocean to first launch a server, and then running a container image on that server).

The Zeit Now model is essentially a serverless offering: end users can push applications to a server instance that is automatically created on their behalf. If I want to run a container in the cloud using docker from my desktop, the first thing I typically need to do is launch a server. Then I push the image to it. The Zeit Now model removes that part of the equation – I can assume the existence of the server, and not need to know anything about how to set it up. I can treat it simply as a Docker container hosting service.
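To make that concrete, here’s roughly what the ‘launch a server first’ route looks like with docker-machine and DigitalOcean, versus the Zeit Now route (the machine and image names are made up; treat the flags as a sketch from memory):

# current generation: provision a server, point Docker at it, run the image
docker-machine create --driver digitalocean --digitalocean-access-token $TOKEN mybox
eval $(docker-machine env mybox)
docker run -d -p 80:80 myuser/myimage

# Zeit Now: no server step at all – just push the build directory
now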

The Zeit Now desktop tools make it really easy to use from the desktop, and the code is available too, but the Github linked deployment is also really powerful. So it’s perhaps worth trying to pick apart what steps are involved, and what alternative approaches are out there, if Reclaim was to pick up on this next generation approach.

So let’s find some entrails… It seems to me that the Zeit Now model can be carved up into several pieces:

  • source files for your website/application need to be somewhere (locally, or in Github)
  • if you’re running an application (node.js, Docker) then the environment needs setting up or building – so you need build files
  • before running an application, you need to build the environment somewhere. For Docker applications, you might want to build a base container somewhere else, and then just pull it in directly to the Zeit Now environment.

To deploy the application, you need to fire up an environment and then add in the source files; the server is assumed.

So what’s needed to offer a rival service is something that can:

  • create server instances on demand;
  • create a custom environment on a server;
  • allow source files to be added to the environment;
  • run the application.

An example of another service that supports this sort of behaviour is Binderhub, an offshoot of the Jupyter project. Binderhub supports the build and on-demand deployment of custom environments / applications built according to the contents of a Github repository. (Here’s an example of the TM112 environment running on MyBinder, and here’s the repo.) An assumption is made that a Jupyter environment is run, but I’m guessing the machinery allows that assumption to be relaxed? Binderhub manages server deployment (so it supports a next generation hosting serverless model) as well as application build and deployment.

Supporting tooling includes repo2docker, which has some of the feel of datasette‘s local container build model, in that it can build a Docker container locally from a local directory (if Docker is installed), but will also build a container from a Github repository. A huge downside is that a local Docker install is required. A --push switch allows the locally built container to be pushed to Docker Hub automatically. (The push is managed using the Docker desktop application, which requires the installation of Docker – which may be tricky for some users. What would be handy would be a standalone Docker Hub CLI supporting remote, automated builds from a source directory pushed to Docker Hub, as per Zeit Now, and as already mentioned above.) Hmmm… it seems like someone has been working on a Docker Hub CLI, but it looks like it’s just file listing operations that are supported; remote builds may not be possible from the Docker Hub side unless we could find a way of co-opting how the automated builds from Github work?
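A minimal repo2docker session looks something like the following (the directory and image names are made up; this assumes Docker is installed and running locally):

# install the builder
pip install jupyter-repo2docker

# build an image from a local directory and push it to Docker Hub,
# without launching the container afterwards
jupyter-repo2docker --no-run --push --image-name myuser/myimage ./myrepo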

One of the things I am trying to lobby for in the OU is a local Binderhub service (it would also be nice if there were a federated Binderhub service where separate organisations could volunteer compute resource, accessed from a single URL…). One thing that strikes me is that it would be nice to see a localrepo2binder service that could push a local directory to a Binderhub instance, rather than requiring Binderhub to pull from a Github repo. This would then mimic Zeit Now functionality…
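Purely hypothetically, the sort of invocation I have in mind (localrepo2binder doesn’t exist; the name, arguments and URL are invented):

# push the contents of the current directory to a Binderhub instance
# and get back the URL of the launched environment
localrepo2binder push . https://binder.example.ac.uk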

PS this looks handy – exoframe (about) – like a self-hosted Zeit Now. As with many node.js apps I’ve tried to try, I can’t seem to install an appropriate node.js version properly enough to run the (client) app :-(

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...

4 thoughts on “Publish Static Websites, Docker Containers or Node.js Apps Just by Typing: now”

  1. I am with Timmy, I think the infrastructure, nextgen hosting thread is going to be ongoing for Reclaim Today, so we should definitely plan to talk. In fact, I would love to have a show about BinderHub, and what you are trying to lobby for at OU, so maybe that would be a brilliant episode we can plan for some time this week? I am very interested in this idea of serverless, and it complements my attempt to rest with headless web development, we should call our new company Hostless :)

    This write-up is amazing, Tony, thanks for making so much of this clearer.
