This is one of the most difficult tasks in software engineering. Measuring quality has to be objective: if it’s too subjective, it’s impossible to track over time, and impossible to prove you’re building a great system. We track a few key metrics in our systems, starting with the engineering and running all the way through deployment and operationalisation.
Listen to your customers. They are the ones who will either recommend you or sink you. As early adopters, your beta users’ feedback is imperative to developing a solid product. Understanding what works, what doesn’t, and why in both cases is the key to success. This has nothing to do with technology. It’s just good old-fashioned usability.
Track the topics of your support requests. If you’re receiving the same recurring support request, you haven’t built something well. Be objective: the feature needs either a reliability review or a usability review.
Your customer's feedback can surprise you
This is the elusive unicorn of software development. “Code Quality” is an often-used and, in my opinion, little-understood term. In true engineering form, people have tried to mathematically determine whether code meets some measure of “quality”. While some metrics are important, I believe the most imperative measures are the following.
· Peer Review: Is your code clear to your peers? If it is not, then you have not created quality code. Every project requires a team to succeed. All members of your team need to thoroughly understand all of the moving parts. To ensure this is always the case, we peer review every new feature and bug fix.
· Code Structure and Test-Driven Development: I’m a big fan of the microservices approach, not because it’s the latest and coolest trend, but because it keeps your code base simple and concise. Constantly refactor your code, and keep it simple and easily testable with mocks. If you can’t easily write a test for your code, you haven’t structured it well.
· Test Quality: This can be tough to gauge. Did you cover all of those pointy edge cases in your tests? What about security, subsystem failure, and others? Have your peers review your tests to ensure that you have done so. Code coverage in Go is quite helpful for this: a simple report displays which code was executed and which was not. It’s easy to forget a corner case; use this tooling to your advantage.
· Continuous Integration and Delivery: If you do this properly, your life becomes much easier as you grow. Code should pass all tests before a PR is merged. After a merge, your code should build, create an image, and deploy itself to a development API environment. Making your latest API continually available ensures your consumers have the latest and greatest to build with, and ensures stability across all of your products.
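The testability point above can be sketched in Go. This is a hypothetical example, not code from our system: business logic depends only on a small interface, so a mock can stand in for the real backend, and `go test -cover ./...` will then show which branches your tests actually hit.

```go
package main

import "fmt"

// Sender abstracts the delivery channel so a mock can stand in
// for a real backend (SMTP, SMS, etc.) in unit tests.
type Sender interface {
	Send(to, msg string) error
}

// Alert depends only on the Sender interface, so it can be
// exercised in a test without any external service.
func Alert(s Sender, user string) error {
	if user == "" {
		return fmt.Errorf("empty user")
	}
	return s.Send(user, "service degraded")
}

// mockSender records every call so a test can assert on it.
type mockSender struct{ sent []string }

func (m *mockSender) Send(to, msg string) error {
	m.sent = append(m.sent, to+": "+msg)
	return nil
}

func main() {
	m := &mockSender{}
	if err := Alert(m, "ops@example.com"); err != nil {
		panic(err)
	}
	fmt.Println(len(m.sent)) // the mock captured one message
}
```

If `Alert` had constructed its own concrete sender internally, that test would be impossible without a live backend — which is exactly the “structured it well” test described above.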
Make sure to cover all those pointy edge cases!
Delivery cadence can be difficult, especially in the alpha phase while you’re building the core of your system. Rather than trying to build an entire product at once, it’s much easier to iterate incrementally. For instance, we had to prove call routing worked with our VOIP provider. We had two options: create a fully functional alpha, or hard-code all routing and simply test a static outbound and inbound call. Taking the latter approach gave us a very rough POC after a few weeks, which in turn gave us the framework to incrementally develop our APIs.
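A hard-coded routing POC like the one described above might look something like this sketch. The function name and phone numbers are illustrative placeholders, not our actual implementation: the point is that two static cases are enough to prove the provider integration before building a real routing engine.

```go
package main

import "fmt"

// route returns the destination for a call. In a POC, routing is
// hard-coded: one static outbound number and one static inbound
// number are enough to prove the VOIP integration works.
// The numbers below are placeholders.
func route(direction string) (string, error) {
	switch direction {
	case "outbound":
		return "+15550100", nil // static outbound test number
	case "inbound":
		return "+15550199", nil // static inbound test extension
	default:
		return "", fmt.Errorf("unknown direction %q", direction)
	}
}

func main() {
	dest, err := route("outbound")
	if err != nil {
		panic(err)
	}
	fmt.Println(dest)
}
```

Once the static calls succeed end to end, each hard-coded branch can be replaced incrementally with real routing logic behind the same function signature.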
Use Scrum. Don’t get bogged down in the tooling; post-it notes work great. Make sure you’re calculating your velocity. Retrospectives are imperative to improving your process. If you aren’t completing your tasks in the time you’ve committed to, analyse why. Are your tasks too large, are you distracted, or are the tasks unclear? An honest retrospective is the only way to improve your process.
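The velocity calculation itself is trivial — average completed story points per sprint — and can be sketched in a few lines of Go. The sprint numbers here are made up for illustration:

```go
package main

import "fmt"

// velocity returns the average story points completed per sprint.
// Completed work only: carried-over tasks don't count until done.
func velocity(completed []int) float64 {
	if len(completed) == 0 {
		return 0
	}
	total := 0
	for _, points := range completed {
		total += points
	}
	return float64(total) / float64(len(completed))
}

func main() {
	// Illustrative data: points completed in the last three sprints.
	fmt.Println(velocity([]int{21, 18, 24}))
}
```

The number matters less than the trend: a velocity that swings wildly between sprints is usually the signal to dig into during the retrospective.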
Fun and effective to track your velocity
Use tools to measure your uptime. Monitoring and alerting are imperative: you want to know you’re having issues before your customers notice. Operational resiliency should be a focus before any piece of code is ever written. Think about how a system will fail, and do your best to mitigate the potential failure without killing your delivery cadence.
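The simplest building block for uptime measurement is a health endpoint that an external monitor can poll. This is a generic sketch, not any specific tool’s API; the `/healthz` path and port are conventional placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

// health reports the service's liveness. A real system would check
// its dependencies (database, VOIP provider, queues) here; this
// sketch simply reports OK.
func health() (code int, body string) {
	return http.StatusOK, "ok"
}

// healthHandler exposes health() over HTTP so an external uptime
// monitor or load balancer can poll it and alert on failures.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	code, body := health()
	w.WriteHeader(code)
	fmt.Fprintln(w, body)
}

func main() {
	// In production you would register the handler and serve:
	//   http.HandleFunc("/healthz", healthHandler)
	//   http.ListenAndServe(":8080", nil)
	// An external monitor then polls /healthz and alerts on any
	// non-200 response, before customers notice the outage.
	code, body := health()
	fmt.Println(code, body)
}
```

The point is that the check exists from day one; swapping in real dependency checks later doesn’t change the monitoring contract.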
Bugs and outages are inevitable, especially in the alpha and beta versions of software. There are two schools of software delivery: never make a mistake, releasing every six months and spending massive amounts of time on QA; or be willing to make mistakes, but make each one only once. With quality peer review, unit and integration tests with solid coverage, and CI/CD, you will likely only make small mistakes, and you will only make them once.