We have flagged first-time selling of energy to the grid as not available for a direct Tesla-and-crypto user. Availability is one of the five principles in the Data Delivery Checklist.
Where does an end-user fit into this?
For the issue Bard has given us, let’s imagine a new smart platform emerges to manage vehicle-to-grid transactions each day. Using AI, it brokers sales on the owner’s behalf, then charges them in crypto each quarter.
This transactional platform might solve a problem for the direct Tesla/crypto user. But suppose a third-party industry analyst who manages community energy settlement now faces a hurdle to getting real-time insights, because of a settlement delay.
Data fed by the transactional platform is not released in real time, because a crypto regulatory validation takes place (remember, I’m making this up!). This might affect how intuitive settlement analysis is.
From the crypto transaction to the transactional platform to the settlement analysis… each stage adds value to the last, but each is dependent on what comes before, as described in this Reflectoring article:
Dependency Rule: each item depends on all the items upstream from its viewpoint
Value Rule: moving downstream, each step adds more value to the product
In the end, it always affects the consumer
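The two rules above can be sketched as a toy pipeline. This is purely illustrative: the `Stage` class and the delay figures are invented for this made-up vehicle-to-grid example, not taken from any real system. It shows how, per the Dependency Rule, every upstream delay accumulates until the downstream consumer feels the full sum.

```python
from dataclasses import dataclass

# Hypothetical sketch: Stage and delay_minutes are invented names,
# and the delays are made-up figures for the example.
@dataclass
class Stage:
    name: str
    delay_minutes: int              # latency this stage adds to the data
    upstream: "Stage | None" = None

    def total_delay(self) -> int:
        # Dependency Rule: a stage inherits every delay upstream of it.
        if self.upstream is None:
            return self.delay_minutes
        return self.delay_minutes + self.upstream.total_delay()

# The imaginary vehicle-to-grid chain from the example above.
crypto_tx = Stage("crypto transaction", delay_minutes=5)
platform = Stage("transactional platform", delay_minutes=120, upstream=crypto_tx)  # regulatory validation
analysis = Stage("settlement analysis", delay_minutes=10, upstream=platform)

print(analysis.total_delay())  # prints 135: the consumer feels the sum of every upstream delay
```

No single stage looks unreasonable on its own; the problem only appears when you look at the whole journey.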
This lag in reporting could hamper the ability to estimate electricity costs fairly for entire areas. That might restrict energy access for the most vulnerable in society: downstream end users who struggle to afford bills on pre-payment tariffs.
Design data products, don’t build them
The various actors in this energy-to-grid pipeline of value can account for these risks in their designs, but only if they incorporate proper design principles into how they approach their data products. They can’t see those products as simple gateways between developers; they have to consider the wider journey. So let’s look at how to critique the way you enable a journey.
User experience analysis can seem qualitative and messy. Checking if something does or doesn’t break is simpler. When you’re already building technical systems with less-than-ideal time and resources, you need to claw back simplicity where you can. So this kind of critique isn’t attractive.
But this practice isn’t a luxury. Assessing how well you support a user’s experience is ultimately assessing whether your product does or doesn’t break. Just because you don’t return error codes doesn’t mean it works.
The Data Delivery Checklist 🚚
Data products are really delivery services. They take information and place it somewhere else. Just like a delivery service might process or package its cargo, data products can do things with the data while it is in their care.
The journey supported by your data delivery service needs to fulfil its obligation to the users. And you need to ensure your data product is not the weak link in the larger chain of value.
The Data Delivery Checklist can help, by enabling you to:
- critique existing journeys that feature your data product
- sense check ideas for a new or enhanced feature
- revisit data product heuristics to stay aligned on quality
- simplify data product language for stakeholders and colleagues
Here are the five heuristics that make up the checklist. For any data product, you need to ask the question…
Does this data delivery service enable journeys to be…
- available? (e.g. regardless of time, platform, expertise or location…)
- intuitive? (e.g. reacting to change, abstracting complexity, customisable…)
- informative? (e.g. offering feedback to actions, useful detail and event history…)
- familiar? (e.g. consistent with convention, governance and brand experience…)
- accurate? (e.g. no ‘wrong orders’, everything is checked, fix problems, trustworthy…)
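To show how the five heuristics might be used in practice, here is a minimal sketch of the checklist as a critique tool. This is illustrative only: the checklist is a set of questions for humans, not a formal API, and the `critique` function and its names are my own invention for this example.

```python
# Illustrative sketch: the Data Delivery Checklist is a set of questions,
# not a formal API; these names are invented for the example.
CHECKLIST = {
    "available": "regardless of time, platform, expertise or location",
    "intuitive": "reacting to change, abstracting complexity, customisable",
    "informative": "offering feedback to actions, useful detail and event history",
    "familiar": "consistent with convention, governance and brand experience",
    "accurate": "no wrong orders, everything checked, problems fixed, trustworthy",
}

def critique(journey: str, verdicts: dict[str, bool]) -> list[str]:
    """Return the heuristics a journey fails, so the team knows where to focus."""
    return [heuristic for heuristic in CHECKLIST if not verdicts.get(heuristic, False)]

# e.g. the made-up settlement-analysis journey fails on availability.
flags = critique(
    "settlement analysis",
    {"available": False, "intuitive": True, "informative": True,
     "familiar": True, "accurate": True},
)
print(flags)  # prints ['available']
```

A heuristic with no verdict counts as failed, which nudges the team to actually discuss each one rather than skip it.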
Inspired by data and design excellence
I then applied the principles on multiple projects at Kaluza. I also scoured the problem statements of dozens of other projects, annotating them with the checklist.
If a product team applies this together, they will unlock value in the following ways:
- Engineers will understand how their product impacts users with every micro-decision they make.
- Product managers will feel more connected to users when they prioritise scope.
- Designers will be more effective on deep dives, because low-level user research will have already been done by incumbent teams.
I have used the checklist to organise product feedback and co-design output, prompt interview questions, coach teams and much more.
Some product teams simply won’t have time to conduct user research. But the occasional get-together to discuss users and journeys will be a great start. (The best way to incorporate design-thinking is to make it as convenient as possible.)
I will write more on how to apply these principles; this article is an introduction to the checklist. It is also another way for me to source feedback, so comments are welcome.