Reducing friction
I always have too many projects on, and switching between them can be a problem. If I go back to a project I haven’t worked on in a while, I end up spending all the available time thinking things like “Is this really the same code as the live version?” and “How did I end up with ninety-three unpushed commits here?”
Reducing the number of projects is never likely to work for me, so clearly it is time to improve my development setup to make it all manageable.
Switching between projects should be easy
Consistency is key here. It may not be possible to have everything work the same way, but I can come close.
I already have most things set up so that running gulp watch starts a development web server and automatically builds all changes. That can be extended to also start up a database or anything else that may be needed.
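As a rough sketch, the gulpfile for that could look something like this - the task names, file paths and the mongod call are placeholders for whatever a particular project actually needs:

```
// gulpfile.js - sketch only; task names, paths and the mongod call are
// placeholders, not a fixed convention.
var gulp  = require('gulp');
var spawn = require('child_process').spawn;

gulp.task('db', function () {
  // Throwaway local database kept inside the project folder
  // (assumes the .data directory exists).
  spawn('mongod', ['--dbpath', '.data'], { stdio: 'inherit' });
});

gulp.task('server', function () {
  spawn('node', ['server.js'], { stdio: 'inherit' });
});

gulp.task('build', function () {
  return gulp.src('src/**/*.js').pipe(gulp.dest('dist'));
});

gulp.task('watch', ['db', 'server', 'build'], function () {
  // Rebuild whenever the source changes.
  gulp.watch('src/**/*.js', ['build']);
});
```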
If I don’t have a development environment already set up, I should be able to get everything I need with just git clone and npm install, and maybe npm link if I’m working on dependencies at the same time. No custom database setup or environment variables I can never remember.
I should not have to think how to deploy a new version
This means getting everything onto a CI server for a start. In some ways that alone would be enough, but since I don’t want to spend a lot of time setting up scripts for each project, they should all deploy the same way as far as possible.
Something like Heroku with hosted MongoDB would work for a lot of my projects, but other things are more experimental and need more than a basic web server and database. That should not require setting up a completely new system. Also, with a large number of small projects, hosting them separately could get expensive.
A previous attempt involved setting up git deployment on an EC2 instance, but that meant dealing with complex git hooks and a Puppet script that needed complicated updates for each new app. And only using it for a few projects made it easy to forget how it worked.
Shared auto-scaling EC2 instances, with CodeDeploy handling individual apps, look like a good solution, though I’ll need to make sure that after the initial setup it is no more effort to work with than a fully managed system.
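The deploy step itself can then be the same tiny script in every project, run as the last CI stage once the tests pass. Something along these lines should do it - the application, deployment group, bucket and key names are placeholders, and I’m assuming the standard aws-sdk package:

```
// deploy.js - sketch of handing a build bundle to CodeDeploy from CI.
// Application, deployment group, bucket and key names are placeholders.
var AWS = require('aws-sdk');
var codedeploy = new AWS.CodeDeploy({ region: 'eu-west-1' });

codedeploy.createDeployment({
  applicationName: 'my-small-app',            // one CodeDeploy app per project
  deploymentGroupName: 'shared-autoscaling',  // the shared instances
  revision: {
    revisionType: 'S3',
    s3Location: {
      bucket: 'my-deploy-bundles',
      key: 'my-small-app/' + process.env.BUILD_NUMBER + '.zip',
      bundleType: 'zip'
    }
  }
}, function (err, data) {
  if (err) throw err;
  console.log('Started deployment', data.deploymentId);
});
```

Only the names change between projects, so there is nothing new to remember per app.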
Unreleased apps should be usable with real data
Actually using an app is by far the best way to test its usability.
I have tried a few different approaches to this in the past, none ideal. Connecting to the development server is fast and easy, but it means having local data that can’t be deleted. Having the dev server connect to an online database solves that, but having the session store thousands of miles away from the app server isn’t exactly great for performance. In either case, nothing works when not on the local network. Using the deployed version of the app solves that, but makes it harder to test the latest updates.
This time I’m going to try making my continuous deployment setup good enough that no significant features are ever missing from the deployed version - test isolated functionality locally with test data while writing code, but do any actual use on the live environment.
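One way to keep that split explicit is a small config module that only ever points local runs at throwaway data - a sketch with made-up names, nothing a project template couldn’t standardise:

```
// config.js - sketch; names are illustrative. Locally no environment
// variables are needed at all; the real database URL only exists in the
// deployed environment's configuration.
var production = process.env.NODE_ENV === 'production';

module.exports = {
  dbUrl: production
    ? process.env.MONGODB_URI
    : 'mongodb://localhost/dev-test-data', // disposable test data
  port: process.env.PORT || 3000
};
```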
All environments should be disposable
Keeping any system working requires effort, so I’d rather not have to - in both development and live environments I should be able to delete everything and rebuild from scripts. My git folder would be a lot easier to navigate if it contained only active projects rather than everything I have worked on in the last few years.
Live environments do require keeping the database up, but with a sufficiently automated replica set, individual servers can be disposable. I should not have to think about managing server patches or fixing a failed server - if something goes wrong or an update is needed, just shut it down and start a new instance.
A couple of my projects store data on local disk, which doesn’t fit well with this. I think I can replace that with NFS - still a single point of failure that has to be maintained, but at least it is kept separate from everything else and unlikely to need much management compared to a full web server.
Anything with authentication or certificate requirements becomes a bit challenging once you disallow manually entering passwords or uploading certificates. For the live setup, an encrypted, write-only S3 bucket combined with IAM instance roles should work. That won’t help with a local development environment, but it may be possible to avoid the requirement altogether there.
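From the instance side, that could look roughly like this, assuming the instance role is allowed to read the relevant keys - the bucket and key names here are made up:

```
// certs.js - sketch of loading TLS material at startup from S3.
// The IAM instance role supplies credentials automatically; the bucket
// and key are placeholders, and the bucket is assumed to use server-side
// encryption.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

function loadCertificate(callback) {
  s3.getObject({ Bucket: 'my-app-secrets', Key: 'tls/server.pem' }, function (err, data) {
    if (err) return callback(err);
    callback(null, data.Body.toString());
  });
}

module.exports = loadCertificate;
```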
Tests should be easy to set up
I know I should use more unit tests, but getting the first one in place tends to be enough effort that I don’t get around to it. Test setup definitely needs to be part of my standard project template - something that says “0/0 tests passed” isn’t all that useful in itself, but changing it to “0/1” is very easy.
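That first test can be as trivial as this - I’m assuming mocha here, but any runner that reports a count would do:

```
// test/smoke.js - the "0/1" placeholder test; mocha is an assumption,
// any test runner that can count will do the same job.
var assert = require('assert');

describe('project template', function () {
  it('runs at least one test', function () {
    assert.equal(1 + 1, 2);
  });
});
```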
Including ESLint as part of the basic test setup is also a good idea - it catches a lot of errors for something that takes almost no effort to set up.