Brazil’s AI boom is now visible in the data. Recent studies have described the country as the world’s most AI-addicted market, underlining how quickly AI is spreading through the enterprise. That adoption is making AI governance in Brazil a business-critical issue, particularly as many organizations struggle to align policy with what is actually happening at the endpoint.
In our latest HexBites edition, Sergio Pohlmann, Global CISO and AI governance strategist with more than 14 years in IT and a close view of Latin America’s security landscape, points to a pattern he often sees:
Organizations are writing more detailed AI policies, building oversight structures, and holding mature conversations about responsible use. Yet, at the same time, many are losing visibility at the endpoint, where employees actually use these AI tools.
Pohlmann calls this the “Governance Paradox.” A company can look mature on paper and still be exposed in practice if it has not made AI use visible and controllable where work gets done.
Where AI governance starts to break down
The first sign is shadow AI.
Many companies restrict public large language models, yet employees still turn to them to move faster, summarize documents, write code, or prepare presentations. That behavior tells its own story. “If your employees are accessing unsanctioned AI tools to maintain productivity, your governance is failing to provide secure alternatives,” Pohlmann said.
That insight matters because it shifts the conversation. Shadow AI is not only a policy violation. It is often a sign that policy has been written without enough regard for how work actually happens. When secure options are slow, unclear, or too limited, employees will look elsewhere.
Another gap that Pohlmann explores is the way organizations approach AI risk. Enterprises are spending serious time on bias, accountability, and responsible AI use, and those conversations matter. The problem begins when AI ethics discussions are not matched by technical control. Governance starts to look symbolic when security teams still cannot see which devices are sending sensitive material to external models, which browser-based AI tools are in use, or where local credentials and API tokens are being exposed.
There is also a visibility problem that older security models were not designed for. In the past, data exfiltration often looked like a file leaving the network, a large download, or a suspicious transfer. In AI workflows, it can look like an ordinary prompt. A few copied lines can contain source code, customer records, internal plans, or contract language. “In the AI era, even data theft can be disguised as a complex prompt,” Pohlmann said.
That is where the governance gap becomes real. The policy may exist. The committee may exist. The actual act of exposure can still happen in a browser window on an employee device, in plain sight and yet easy to miss.
Swipe through our latest HexBites edition featuring Sergio for a quick visual read of these warning signs.
Where AI governance turns into real control
For Pohlmann, governance only matters if it reaches the endpoint. That is where policy becomes observable, enforceable, and measurable.
He points to three priorities security leaders should focus on.
1. Rethinking DLP for AI workflows
Traditional data loss prevention was not designed for the way employees now use AI. That is why contextual DLP matters. In most environments, legacy DLP still relies heavily on keywords and static rules, which makes it too blunt for prompt-driven workflows. Security teams need visibility when large blocks of code, customer records, or other sensitive material are being fed into AI tools through browsers and productivity apps, not just whether a certain word appears in the text.
2. Extending identity to AI agents and plugins
Identity now extends beyond human users. Employees increasingly rely on AI agents, browser extensions, and plugins that can access enterprise data on their behalf. That changes the trust model. “Identity is no longer just about human users,” Pohlmann said. A Zero Trust approach needs to cover these tools as well, so only verified applications and agents can interact with sensitive data at the endpoint.
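One way to picture a Zero Trust posture for non-human identities is a deny-by-default registry: an agent or plugin gets access only if it is both verified and explicitly scoped to the data it requests. The identity fields and scope names below are assumptions made for illustration, not a reference to any particular identity product.

```python
from dataclasses import dataclass

# Illustrative sketch: agent records and scopes are hypothetical.
@dataclass(frozen=True)
class AgentIdentity:
    client_id: str
    verified: bool          # e.g. passed attestation or app review
    scopes: frozenset[str]  # data the agent is allowed to touch

REGISTRY = {
    "summarizer-ext": AgentIdentity("summarizer-ext", True, frozenset({"docs:read"})),
    "unknown-plugin": AgentIdentity("unknown-plugin", False, frozenset()),
}

def authorize(client_id: str, scope: str) -> bool:
    """Deny by default: only verified agents with an explicit scope pass."""
    agent = REGISTRY.get(client_id)
    return bool(agent and agent.verified and scope in agent.scopes)
```

The design choice that matters here is the default: an unregistered extension and an unverified plugin both fail closed, which mirrors how Zero Trust treats unknown human users.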
3. Monitoring local AI processing
The same is true for local AI processing. As smaller language models begin to run directly on user devices, more data handling moves back to the endpoint. That changes the shape of the problem. Traditional network monitoring will not catch everything happening inside the device. Security teams need visibility into local AI processes, memory integrity, and how data moves between applications at the endpoint.
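A minimal starting point for that visibility is a process inventory check that flags known local AI runtimes. The runtime names below are common local LLM runtimes used as examples, and in practice the process list would come from an EDR or endpoint agent rather than a hard-coded list.

```python
# Hypothetical inventory check; runtime names are examples, and a real
# deployment would feed this from endpoint telemetry, not a static list.
KNOWN_LOCAL_RUNTIMES = {"ollama", "llama-server", "lmstudio", "koboldcpp"}

def flag_local_ai(process_names: list[str]) -> set[str]:
    """Return running process names that match known local AI runtimes."""
    return {p for p in process_names if p.lower() in KNOWN_LOCAL_RUNTIMES}
```

An inventory like this only answers the first question, what is running; memory integrity and inter-application data flows need deeper endpoint instrumentation.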
Why endpoint visibility is now central to AI governance
What makes Pohlmann’s view useful is that it brings AI governance back to something practical. The challenge is not writing broader principles or building more committees. It is closing the gap between formal governance and the reality of how AI is being used across the business.
That has implications for IT leaders. The real issue is no longer whether employees are using AI. It is whether that use is visible, controlled, and aligned with the organization’s policies at the point where work gets done. The endpoint is where governance either becomes enforceable or begins to unravel.
What this means for CISOs
For CISOs, the takeaway is fairly clear. AI governance now has to be judged less by the strength of policy documents and more by how well organizations can see and control AI use in practice.
The priorities that stand out:
- Implement contextual DLP. Security teams need to understand the context in which sensitive data, including code, customer records, and internal content, is being entered into AI interfaces.
- Extend identity management to AI agents and plugins. Governance now has to account for the non-human tools that increasingly act on behalf of users and interact with enterprise data.
- Strengthen monitoring of local AI processing. As smaller language models begin running on user devices, security teams need visibility into local AI processes, memory integrity, and data flows at the endpoint.
For organizations in Brazil, that shift matters. AI adoption will keep growing. The organizations that stay ahead of the risk will be the ones that treat endpoints not as a secondary concern, but as the place where governance has to prove itself.