You Don’t Own the Data — You’re Borrowing It: Why Customer Trust Defines the Future of AI
In the AI era, data stewardship isn’t a technical chore. It’s a moral and strategic obligation. Businesses must stop saying they own the data. They don’t. Their customers do.
📌 THE POINT IS:
Customers own their data. Businesses are merely custodians. Building a Trusted Data Environment is no longer optional; it’s how organizations earn the right to use that data in the first place.
The Ownership Fallacy
For decades, “the business” in large enterprises has talked about owning its customer data. But the legal and ethical landscape has changed. Regulations like the GDPR, the California Consumer Privacy Act (CCPA), Brazil’s LGPD, and India’s DPDP Act all enshrine the idea that the individual owns their personal data. Businesses merely hold it on loan.
Under Article 17 of the GDPR, individuals have the “right to erasure” — the ability to request deletion of their data at any time. If businesses truly owned customer data, that right wouldn’t exist. The very fact that it does underscores the shift from corporate ownership to corporate custodianship.
“Trust is the ultimate currency in the digital age.” – Satya Nadella, CEO, Microsoft
When leaders fail to internalize this, the results can be catastrophic. Data breaches, AI bias, and misuse of personal information don’t just create legal risk; they erode trust. And trust, once lost, is nearly impossible to rebuild.
Trust as the New Asset Class
Analysts at Deloitte and McKinsey consistently find that organizations with high trust scores outperform peers in customer retention, brand equity, and growth. According to Edelman’s 2024 Trust Barometer, 71% of consumers say they’re unlikely to buy from a company they don’t trust to protect their data.
Data, then, isn’t the new oil — trust is. The data itself has little value without a trusted relationship behind it. In the AI era, where companies ask customers to share more personal and behavioral data than ever, trust becomes the foundation of every model and every algorithm.
This brings new paradigms and challenges old assumptions for technology leadership, and especially for business leadership teams. In the past, business teams assumed tech was doing whatever was needed to protect “their” data. Tech, for its part, often faulted the business for not providing the funding required to do so adequately; and even when funding exists, technology is under constant pressure to “go faster” and deliver business results. That has been the mindset and culture at the large companies I’ve worked at or studied. As we fly into a new world of agentic AI, autonomous systems, and intelligent decisioning, though, all of this is called into question. Enter the conversation about Data Products, continuous funding models, and cross-functional teams charged not with delivering “business results” but with delivering “customer outcomes.”
The whole way of running businesses powered by technology is turning upside down.
Shared Stewardship Across the Business
Too many organizations still treat data protection as IT’s responsibility. But every business function — from marketing to finance — touches customer data and therefore shares accountability for how it’s handled.
The Chief Data Officer plays a vital role as educator and architect, but business leaders must lead the cultural charge. They are the ones customers will hold responsible when trust is broken. This shift requires a new kind of leadership: one that sees data stewardship as core to the customer experience, not as a back-office compliance function.
“If data is the lifeblood of AI, trust is the oxygen that keeps it alive.” – MIT Sloan Management Review, 2025 AI & Data Leadership Report
At Nationwide, we often remind ourselves that we’re not just an insurance company — we’re a protection company. That promise extends beyond physical assets to digital ones. The same way we safeguard our customers’ homes and livelihoods, we must protect their information with the same care and diligence.
That’s one of the reasons I’ve been leading the charge on building what I call a Trusted Data Environment. On the surface, it’s about ensuring that we can trust the results of our analytics and AI systems. But at its core, it’s about something deeper — ensuring that customers can trust us to use their data responsibly, transparently, and securely. Because if they can’t trust us to protect their data, why should they trust us with anything else?
Building this environment isn’t just a technology project. It’s a cultural one. It’s causing us to re-examine development practices, cybersecurity policies, and the partnership between the business, their tech counterparts, and the enterprise data and cyber teams. Everything is getting called onto the carpet as we careen towards a future that demands high-speed throughput by autonomous machines — machines that, ultimately, we all need to be able to trust as they help us simplify and enhance the customer experience.
But most of all, this demands that business and technology leaders work together to define what “trust” even means in operational terms, from how data is shared to how models are validated and monitored. In a recent discussion, I reminded my peers and senior leaders that although we’re spending a lot of time talking about building AI solutions, we can’t forget that we also need to securely run those solutions over their lifetime. That means investing in runtime monitoring and other automated controls that maintain trust in our systems without relying solely on humans — who, even when they’re in the loop, may not be able to see or respond to an emerging issue fast enough.
The AI Trust Loop
AI depends on data quality and provenance. But data quality requires that business teams and technology partners build the necessary controls into their systems up front. That creates a self-reinforcing loop: when you demonstrate to customers that you can be trusted with their data through good hygiene and system controls, they share better data; better data powers more reliable AI; and reliable AI reinforces customer trust.
Conversely, when data is misused or mishandled, the loop collapses. Models degrade, customers withdraw, and regulators step in. Gartner predicts that by 2027, 60% of enterprises will use trust metrics as a core KPI for AI performance. This isn’t a compliance trend; it’s a business survival strategy.
Leadership Imperatives
Replace “ownership” with “custodianship.” The words you use shape your culture.
Make trust measurable. Track customer data sentiment and correlate it with retention.
Elevate the CDO as a cultural leader, not just a technologist.
Treat every data touchpoint as a trust transaction.
Embed privacy-by-design into all AI initiatives, at every layer of the development and environment stack.
In the end…
The strongest moat in the AI age won’t be proprietary data or faster models. It will be earned trust. Customers are paying attention. They know their data is theirs. The question for today’s leaders is simple: what are you doing to earn and prove that they can trust you enough to let you use it?
References
General Data Protection Regulation (GDPR), Article 17 – Right to Erasure. https://gdpr-info.eu/art-17-gdpr/
Edelman Trust Barometer 2024. https://www.edelman.com/trust/2024/trust-barometer
Deloitte Insights – Building Trust in AI. https://www2.deloitte.com/us/en/insights/focus/trust/ai-trust.html
McKinsey Digital – The State of AI in 2025. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/state-of-ai-2025
MIT Sloan Management Review – Five Trends in AI and Data Science for 2025. https://sloanreview.mit.edu/article/five-trends-in-ai-and-data-science-for-2025/
Gartner Predicts 2027 Trust Metrics in AI. https://www.gartner.com/en/newsroom/press-releases