(Some of) The Issues with Typical Software Consulting / by Chris Shaffer

“Accepting Delivery”

If you’ve spent some time in software, you’ve likely encountered this situation, or one similar to it, on several occasions:

We went through the demo, and everything looked good. After testing numerous cases, we accepted delivery.

As soon as the software was deployed, it stopped working. It was slow, it crashed... The consultants told us that it was because of ‘server issues’ and that our IT team needed to ‘tune’ them.

Our IT team seemed in over their heads. After weeks of traffic analysis, they imposed a rule - the new tool would only be accessible from one workstation (which had to be shared either physically or via remote desktop). This was an inconvenience, but it did ‘solve’ the problem.

What happened?

It turns out that the consultants didn’t think about database locks, didn’t write non-blocking queries, and didn’t design the data model and processing tasks to make any of that easier. Their software always worked perfectly in demos, when all of the potential users were gathered behind one keyboard, but it was doomed to collapse as soon as a few users tried to do anything at the same time.
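To make the failure mode concrete, here’s a minimal sketch in SQL Server syntax (the stack in these stories appears to be Microsoft’s); the database, table, and column names are hypothetical, not taken from the actual project:

    -- Session A: the application opens a transaction, updates a row, and then
    -- does slow work (rendering a report, calling a web service) before committing.
    BEGIN TRANSACTION;
    UPDATE Orders SET Status = 'Processing' WHERE OrderId = 42;
    -- ...seconds or minutes pass while the row lock is still held...
    COMMIT;

    -- Session B: under the default isolation level, any other user who touches
    -- that row now waits on Session A's lock.
    SELECT Status FROM Orders WHERE OrderId = 42;  -- blocks until Session A commits

    -- With everyone behind one keyboard, nobody ever notices. Two common
    -- mitigations: keep transactions short, and let readers see the last
    -- committed version of a row instead of waiting for the writer, e.g.:
    ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;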

 

This wasn’t fraud - the consulting firm had survived for decades without learning how to write multi-user applications… in fact, without even learning that this was something one could or should do. From their perspective, their software always worked perfectly until the client’s sysadmins got their grubby hands on it. They kicked problems back over the fence and were never challenged - at least, never specifically - because the clients weren’t knowledgeable enough to open up the code and point to what was wrong. So the consultants continued to believe their own story: the problems with their software were caused by clients’ sysadmins.

Are we “done” yet?

Here’s another one that might be familiar:

We built a simple data model to track our sales prospects. A ‘contact’ belongs to a ‘company’.
After rollout, we decided that we want to track contacts across multiple companies as they change jobs.

The original consultants tell us that they’ll build a new system that supports that, and we’ll then have to import our data from the old system into the new one. They’ll work with our team to develop SSIS packages that download the data from the old system, reformat it, and upload it into the new one.

For a period of several weeks, salespeople need to create or copy contacts manually, often two or three times. No one is entirely sure why.

A seasoned database or DevOps engineer might surmise that the original data model had a linkage (foreign key) directly from contact to company, and tell you that you can “migrate” in place, leaving yourself free to develop the new functionality without any import/export step or other interruption to the users (the first two steps are sketched in SQL after the list):

  1. create the necessary new contact-to-company relationship table (which foreign keys to both contact and company),

  2. write a script that “migrates” from the old table structure to the new one (without deleting the original data, by inserting one row into contact-to-company for every contact),

  3. test and run it, and

  4. deploy a new version of the application that reads from and writes to the new contact-to-company table (initially, without any new functionality).

  5. There’s a vestigial column on the contact table - the old direct link to company - that the new code ignores. It can be cleaned up now or later.
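The first two steps, sketched in SQL - the table and column names here are illustrative, not the original system’s:

    -- 1. The new many-to-many relationship table, with foreign keys to both sides.
    CREATE TABLE ContactCompany (
        ContactId INT NOT NULL REFERENCES Contact (ContactId),
        CompanyId INT NOT NULL REFERENCES Company (CompanyId),
        PRIMARY KEY (ContactId, CompanyId)
    );

    -- 2. Backfill it in place: one row per existing contact, copied from the old
    --    direct foreign key. Nothing is deleted; the old column simply goes unused.
    INSERT INTO ContactCompany (ContactId, CompanyId)
    SELECT ContactId, CompanyId
    FROM Contact
    WHERE CompanyId IS NOT NULL;

From there, step four is purely an application change - the code starts reading and writing ContactCompany instead of the old column - and the users never have to stop working or re-enter anything.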

How did that happen?

Your original consulting relationship was built around the “deliverable.” New feature? New deliverable. Because the success or failure of the project was entirely determined by that one “finished” result, the development process the consultant used looked like:

  1. Create data model

  2. Run initial “seed” script

  3. Write code

Did the data model change? Go back to step one. That means starting with an empty database: a new version of the software requires a “new version” of the database. “Maintenance” and “support” were part of the plan, but “change management” and “integration” weren’t.
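In practice, “change management” doesn’t require exotic tooling - often it’s nothing more than a folder of ordered migration scripts applied to the live database instead of re-seeding an empty one. A hypothetical sketch (the SchemaVersion bookkeeping table is my assumption, not something from the story):

    -- migrations/003_contact_company.sql
    -- Each numbered script runs exactly once against the existing database;
    -- a small bookkeeping table records which migrations have already been applied.
    IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE Version = 3)
    BEGIN
        -- ...schema changes and data backfill go here (see the earlier sketch)...
        INSERT INTO SchemaVersion (Version, AppliedAt) VALUES (3, GETDATE());
    END;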

Why? It’s certainly cheaper to develop that way. Or, as with our first example, the consulting firm may not even be aware that this is a thing other people out there know how to do (or, if they are, they hold an incorrect view like “it requires some particular database technology… maybe one of those new schema-less databases, but we don’t have experience with them and those have drawbacks, from what we hear”).

How to Improve the Model

There’s a simple answer - to know those things and do them right.

Of course, advice that boils down to “be better” is of limited usefulness, especially to a customer. We want to plan for the things we don’t know. If you, as the customer, had that expertise, you probably wouldn’t have needed to hire the consultant in the first place.

Plan for those specifics (and more) in the contract

That’s a little more actionable. A contract that includes some specifics about “software performance” (how many simultaneous users must be supported) and “continuous integration” (a “how-to” guide for making schema changes, and so on) might weed out some of the firms that don’t know how to do those things.

But these are only two prominent examples out of the dozens that are likely to crop up on any given project. There’s no comprehensive list of all of the unknown unknowns relevant to a project that isn’t built yet.

Consultants: It’s our job to gauge when something like continuous delivery is needed - and it’s basically table stakes today unless the ask includes some key phrases like “proof-of-concept” or “throwaway”.

Clients: Until that happens, it’s probably best to assume that the lowest bidder may have the lowest standards; the software industry has less consensus than most about what the bare minimum is (see “Levels and types of expertise”).

Computer code isn’t black magic; it’s simply a hyper-specific language designed to give instructions to something that can’t think for itself. Coding doesn’t require more physical exertion than typing; a contract that covers everything the software needs to do, with no room for error, would be… the software. If you can’t feed your contract directly into a computer and be done with it, then by definition it requires human judgement to interpret.

Plan some wiggle room ahead of time for “things we didn’t spec out but may decide we want.” Examples help. I’ve learned to enumerate some things the statement-of-work won’t include in addition to what it will - the point of that exercise isn’t that you expect to have an exhaustive list of non-features, it’s so that when something inevitably comes up that isn’t explicitly in or out, you can ask which category it belongs in. You’ll never get to an exact line in the sand, but you should be able to get both parties to a shared understanding of “I know it when I see it.”

Clients: The less wiggle room you want to leave in the specs, the more expensive (or else flawed) they’ll be to write. More wiggle room requires more trust.

Consultants: If there’s no wiggle room in the spec, then a flawed spec will lead to flawed software. If the spec is out of your hands, it’s your job to set expectations that you may have to revisit it under another SOW.

Consulting vs. Contracting

A purist might argue that the examples above don’t pertain to consultants at all. Those are issues with contractors. A consultant provides expertise - part of a consultant’s job is to anticipate the things that you should ask for but didn’t know to. A contractor provides labor to fulfill a contract - it’s not their job to tell a client what’s missing from the contract.

It’s not entirely unfair to say that the clients in those examples hired contractors when they needed consultants. In another industry, you might even hire a consultant to help write the contract and manage the contractors.

But the software industry, including contractors and consultants themselves, is rarely able to make that distinction properly. The terms are used interchangeably more often than not.

Levels and types of expertise

Part of this comes down to software being an extremely new industry, without specific agreed-upon terms for different levels and types of expertise.

Everyone can understand the difference between a partner at a law firm and a paralegal, or between a surgeon and a nurse. In software, we’re lucky to get a vacuous brag like “10xer”.

You don’t need insider knowledge to know that a plumber installing a toilet is responsible for connecting it to the water pipes (and not the toilet manufacturer). In software, it’s not uncommon that neither side of a conversation knows whether they’re talking about toilet manufacturing, plumbing, or both. There’s no shortage of market-leading CRMs rolled out without any accompanying plan to bring over the data from the old system.

Is your expertise…

  1. Operating complex software properly?

  2. Translating math equations into JavaScript (or another language)?

  3. Deciding which language-level components (classes, controllers, repositories) need to be created?

  4. Writing SQL queries?

  5. Debugging database-related issues and improving performance of SQL queries?

  6. Writing code to take one-time or scheduled dumps from one system into another?

  7. Writing code to allow other systems to talk to yours?

  8. Designing data models?

  9. Figuring out how to, at an abstract level, transform data into something actionable?

  10. Creating an image of what the software will look like to a user?

  11. Deciding how those screens will behave, how a user will approach them, and how that affects their broader interactions?

  12. Clicking on things to see if they break?

  13. Making sure that software does what it was intended to do?

  14. Making sure that what it’s intended to do is actually what’s best for the business?

  15. Making sure team members are working on the appropriate projects and focusing on priorities?

  16. Ensuring that properly written code is executed properly?

  17. Ensuring that software is able to integrate and evolve?

  18. Writing one-time scripts to derive insights from data?

  19. Pushing data through various libraries to develop machine-learning models?

  20. Developing the neural networks embedded in said libraries?

I might call these…

  1. Sysadmin

  2. Programmer

  3. Software Developer

  4. Database Developer

  5. Database Administrator

  6. ETL (Extract-Transform-Load) Engineer

  7. API Developer

  8. Data Architect

  9. Information Scientist

  10. UI Designer

  11. UX Designer

  12. Tester

  13. QA Engineer

  14. Business Analyst

  15. Project Manager

  16. DevOps Engineer

  17. Systems Architect

  18. Data Scientist

  19. Machine-Learning Technician

  20. Machine-Learning Developer

But that’s nowhere near being a standard. If you put five people from the software industry into a room and showed them these definitions, you’d get seven different opinions.

There’s no shortage of programmers who call themselves “developers,” the vast majority of self-described “machine-learning engineers” are really just technicians (at best), and you can make an enemy of a UI designer pretty quickly by telling them they’re “not an architect.”

An enterprise project is likely to need twelve or more of these skill-sets in order to succeed, but many are lucky to get five or six. A startup that “just needs an app” might only need four or five of these skill-sets, but only get two.

Trust

There’s also room in all of this for interpretation. As with anything else, gray areas open up loopholes for someone on either side to operate in bad faith.

That’s a topic for another day, but (again, as with anything else) gray areas and wiggle room require trust, and that trust becomes more important as a project scales larger. No contract will protect you from someone executing it in bad faith.

Clients: It’s probably best to start with small gray areas with the expectation that you’ll need to expand them if you want to maximize value.

Consultants: I like to over-deliver on the first go-round, and raise rates to match on the second. Consider it a contract acquisition cost - you’re doing that instead of making sales calls; be careful not to set expectations that you’ll do more for less indefinitely.

The deliverable isn’t the product

The customer doesn’t want someone to develop a piece of software; they want someone to develop a new business capability (or reduce the cost of an old one).

A business still needs metrics, deliverables, and budgets… but it’s important to understand that those things are abstractions, not the goals themselves. Don’t chain yourself too tightly to them.

I like to think of a software demo or “delivery” the way a teacher thinks of an exam or a paper - it’s an evaluation, a way of maintaining accountability, but it’s not the reason you’re there.