Case Study / Walk-through
It starts with your vision ...
If you already do strategic planning, that's a head start. If not, that's where we'll start. We're looking for 3-6 high-level priorities: a sense of where you see the business going next. Then we'll consider how technology can help achieve each of those.
We develop a roadmap...
These high-level priorities turn into detailed project plans. That gives us estimated schedules and helps us budget: the exercise will tell us whether we need to hire more (or fewer) people to reach milestones by the deadlines we set for ourselves. The roadmap also gets distilled into an executive summary.
This is where we get down to the technology part. What does the data model look like? What do we need to store, when do we need to store it, and where do we get it from? How does data need to flow through the application? What are the steps in any processes that will be tracked or performed by the technology? Where is the complex logic happening? Let's map it out!
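As an illustration, that mapping exercise can start as a sketch in code before any database exists. The entities below (Customer, Order) are hypothetical placeholders, not from any particular engagement; the point is to write down what we store, when it's captured, and where the logic lives:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical entities, purely for illustration: what do we store, and when?
@dataclass
class Customer:
    id: int
    email: str
    created_at: datetime  # captured at signup

@dataclass
class Order:
    id: int
    customer_id: int       # where the data comes from: the signup flow above
    total_cents: int       # store money as integer cents to avoid float rounding
    placed_at: datetime
    shipped_at: Optional[datetime] = None  # filled in later by fulfillment

def order_status(order: Order) -> str:
    """One place where the 'complex logic' is mapped out explicitly."""
    return "shipped" if order.shipped_at else "pending"
```

Even a toy sketch like this forces the questions in the paragraph above: which fields are required at creation, which arrive later, and which process is responsible for filling them in.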
Now it's time to set up your infrastructure. Cloud servers need to be commissioned, communication channels between them created, and outlines of your application(s) built. Builds need to be configured, deployments automated, and test environments constructed where you can experiment without jeopardizing users' data. Permission models need to be designed so that you can control access to sensitive materials.
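To make one of those pieces concrete, here is a minimal sketch of a role-based permission check. The roles and actions are invented for illustration; a real system would load these from configuration or your identity provider:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles and actions here are hypothetical examples.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())
```

Starting from an explicit table like this makes it easy to audit who can touch sensitive materials, and to add roles later without rewriting the check itself.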
When a new engineering hire comes in, opens their laptop and says "where do I start?" - this is where we answer that question: "you need to clone this repo, connect to this database, and run this command to get everything running on your machine... now, write some code, push that code here, watch the tests run, write a few of your own, and ask this person for a review before it finally goes out into the wild."
Tracking our work...
You'll want some lightweight processes to break down each project item into a small set of sub-tasks as an engineer works their way through it, and to track issues that come up. Trello boards, JIRA tasks, GitHub issues... whatever works best for your team to make sure nothing falls through the cracks!
Testing our code...
It starts with a checklist. What are the cases I need to test? What should happen when I press this button? What are the edge cases? What are the categorical differences between inputs that might drastically change the expected behavior? Is there emergent complexity when I combine two of the test cases?
From that list, we can begin to automate those tests. These also serve as a form of historical documentation. What exactly did we expect to happen when a user pushed this button? Was there a reason it works this way, or is it just something we didn't think of?
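As an illustration of turning that checklist into automated tests, here is a sketch around a hypothetical `apply_discount` function (both the function and the codes are invented for this example). Each checklist item becomes one test, and the test names double as the historical documentation described above:

```python
def apply_discount(total_cents: int, code: str) -> int:
    """Hypothetical function under test: 'SAVE10' takes 10% off; unknown codes do nothing."""
    if code == "SAVE10":
        return total_cents - total_cents // 10
    return total_cents

# One test per checklist item, runnable with pytest or called directly.
def test_known_code_discounts():
    assert apply_discount(1000, "SAVE10") == 900   # the happy path

def test_unknown_code_is_a_no_op():
    assert apply_discount(1000, "BOGUS") == 1000   # categorical difference: invalid input

def test_edge_case_zero_total():
    assert apply_discount(0, "SAVE10") == 0        # edge case from the checklist
```

When someone asks "was there a reason this works this way?", a test named `test_unknown_code_is_a_no_op` answers the question without any archaeology.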
Last but not least, let's build a culture of keeping our co-workers up to date on what we're putting in front of users and when: an internal log of what's released each day, week, or month that can be referenced.