- Ability of a system to grow and manage increased traffic.
- Increased volume of data or requests.
- Our goal is to achieve this growth without a loss in performance.
- Bad system design can create a bottleneck in the number of users or amount of traffic an application can handle, or can cause costs to grow exponentially just to serve a small increase in traffic.
- Probability a system will fail during a period of time.
- Reliability for software is slightly harder to define than hardware reliability. Software may have degrees of reliability.
- The overall system is reliable if it keeps working even when individual software or hardware components fail.
- That means we need systems in place, like automated testing, to prevent bugs from being deployed to production.
- You also need tools that can predict and compensate for hardware failure so that before a server even fails, you can be notified and preemptively take that server offline and repair it before it starts serving bad requests.
A common way to measure reliability is Mean Time Between Failures (MTBF).
Here is how to calculate MTBF: MTBF = (total elapsed time - total downtime) / number of failures
For instance, if the total elapsed time is 24 hours, the total downtime is 4 hours, and there are 4 failures, then MTBF = (24 hours - 4 hours) / 4 failures = a 5 hour MTBF.
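The calculation above can be sketched as a small Python helper (the function name is my own, for illustration):

```python
def mtbf(total_elapsed_hours, total_downtime_hours, num_failures):
    """Mean Time Between Failures: total uptime divided by failure count."""
    return (total_elapsed_hours - total_downtime_hours) / num_failures

# Example from the text: 24 h elapsed, 4 h of downtime, 4 failures
print(mtbf(24, 4, 4))  # 5.0 hours
```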
Amount of time a system is operational during a period of time. This is probably the most important metric for your users: whether your site actually works, and what percentage of the time it works.
Poorly designed software requiring downtime for updates is less available.
The metric for availability is pretty straightforward: Availability % = (available time / total time) x 100.
For example, if your site is available for 23 out of 24 hours, the availability percentage is (23 hours / 24 hours) x 100 = 95.83%.
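In code, this formula is a one-liner (function name is illustrative):

```python
def availability_pct(available_time, total_time):
    """Availability as a percentage of total time."""
    return available_time / total_time * 100

# Example from the text: available 23 out of 24 hours
print(round(availability_pct(23, 24), 2))  # 95.83
```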
Here is a quick reference table for annual downtime at common availability percentages:

| Availability % | Downtime per year         |
| -------------- | ------------------------- |
| 99%            | 3 days, 15 hours, 40 mins |
| 99.9%          | 8 hours, 46 mins          |
| 99.99%         | 52 mins, 36 secs          |
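These downtime figures follow directly from the availability formula. A sketch of the derivation, assuming an average year of 365.25 days (which matches the figures above):

```python
def annual_downtime_minutes(availability):
    """Minutes of downtime per year for a given availability percentage."""
    minutes_per_year = 365.25 * 24 * 60  # average year, including leap years
    return minutes_per_year * (1 - availability / 100)

print(round(annual_downtime_minutes(99.99), 1))  # ~52.6 minutes, i.e. 52 mins 36 secs
```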
Reliability vs Availability
- A reliable system is always an available system.
- Availability can be maintained through redundancy, but a redundant system may not be reliable. An example is a microservice architecture, where you can easily launch a new replica without hurting the system's availability.
- Reliable software will be more profitable, because providing the same service requires fewer backup resources.
- Requirements will depend on function of the software.
Using planes as an example: to stay available through routine maintenance, an airline can keep backup planes in its fleet, ready to roll out and take over a flight. But for the plane itself, reliability matters most, because once it is in the air you do not want a failure.
- How well the system performs
- Latency and throughput often used as metrics.
Latency means how long a request takes to get back to the user.
Throughput means the total amount of requests and traffic your system can handle.
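A minimal sketch of measuring these two metrics in Python, using a stand-in `handle` function in place of a real request handler (names are my own):

```python
import time

def measure(handle, requests):
    """Return (per-request latencies in seconds, throughput in requests/sec)."""
    latencies = []
    start = time.perf_counter()
    for req in requests:
        t0 = time.perf_counter()
        handle(req)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return latencies, len(requests) / elapsed

# Example: a trivial handler timed over 100 dummy requests
latencies, tput = measure(lambda req: req * 2, range(100))
```

Note that latency and throughput are related but distinct: lowering per-request latency does not necessarily raise throughput, since throughput also depends on how many requests can be processed concurrently.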
- Speed and difficulty involved in maintaining the system.
- Observability: how hard it is to track down bugs.
- Difficulty of deploying updates.
- Want to abstract away infrastructure so product engineers don’t have to worry about it.