
AI Productivity at Scale: What Snowflake’s Surge and Europe’s New Rules Signal for Your Workplace Tools

Snowflake AI productivity: when automation becomes a growth engine

Snowflake’s recent performance shows how deeply AI is now embedded in enterprise productivity. Its CEO, Sridhar Ramaswamy, says artificial intelligence is both driving sales and fundamentally changing how customers use its cloud data platform. Beyond headline revenue growth to USD 4.7 billion (approx. RM21.9 billion), the more instructive shift is how AI has shortened time to value for clients and internal teams alike. Tools such as the Cortex Code agent let customers deploy AI agents in a fraction of the previous timelines, shrinking months-long projects into far shorter cycles. Inside Snowflake, designers are now expected to write code and build working prototypes directly on real codebases, collapsing traditional handover-heavy workflows. The result is concrete productivity gains: tasks that previously took weeks now finish in hours, reshaping expectations of how data and software teams work.

Inside enterprise AI tools: from smarter queries to automated insight

Snowflake’s approach illustrates how enterprise AI tools are evolving from add-ons into embedded productivity layers. Instead of expecting every analyst or engineer to write complex scripts, AI-driven features help users ask smarter questions of their data through natural-language queries and guided prompts. AI agents can automate routine coding, generate boilerplate data pipelines and suggest optimisations, reducing the friction of setting up and maintaining data workflows. This makes it easier for non-specialists across finance, marketing or operations to interact with data platforms and uncover insights quickly. In Asia, Snowflake reports that customers are pushing hard to modernise their data with AI, encouraged by national strategies in places like Singapore that treat AI as an enabler across industries. For Malaysian organisations using similar cloud platforms, these patterns point to an emerging norm: productivity will increasingly depend on how well AI is woven into everyday tools, not just on raw cloud capacity.

EU AI Act impact: why universities may have to ‘change everything’

While productivity gains accelerate, Europe is moving in the opposite direction on governance, especially for education. The EU AI Act, which began coming into force in August 2024, classifies AI used for student assessment as “high-risk”, placing it alongside hiring and credit scoring systems. According to Thomas Jørgensen of the European University Association, many academics are informally using tools like ChatGPT to grade work without institutional oversight—practices that could become illegal once detailed guidelines from the EU AI Office arrive. Even universities that have built their own assessment tools will face strict requirements around transparency, training data and risk management. Jørgensen warns that institutions may need to “change everything” about how they deploy AI, revisiting teaching, assessment and research workflows. The same concerns extend into research, especially in health-related fields where data privacy and regulatory compliance are already sensitive and now intersect with AI use.

The new tension: rapid AI adoption versus workplace compliance

Put together, Snowflake’s AI productivity story and the EU’s regulatory push reveal a growing tension. On one side, enterprises are racing to adopt AI to speed up coding, automate analysis and streamline decision-making. On the other, regulators are sharpening requirements for safety, transparency and governance, especially where AI affects people’s futures—such as students, job applicants or patients. Universities are an early test case, but the same logic will apply to corporate AI tools used in performance reviews, automated approvals or risk scoring. Organisations cannot assume that using a major commercial model or cloud platform automatically keeps them compliant. Instead, they will need clarity on what AI is doing inside their workflows, how decisions are made and who is accountable. The emerging global norm is that AI workplace compliance will sit alongside cybersecurity and data protection as a core governance obligation, not a side project for innovators.

What Malaysian organisations should do now: governance before enforcement

For Malaysian companies and universities, neither Snowflake’s trajectory nor Europe’s rules are distant issues. Many already rely on global cloud platforms, LLM-based assistants and automated analytics—sometimes with little documentation of data flows or decision points. Even without an EU-style law in Malaysia, several risks are clear: cross-border data transfers that clash with sectoral rules, dependence on vendors that might not meet future standards, and fragmented internal practices where staff quietly use public AI tools for grading, HR or analysis. A pragmatic first step is to audit current AI tools and use cases, including unofficial ones, and map what data they touch and where it is stored. Next, draft internal AI policies covering acceptable use, human oversight of high-impact decisions and minimum documentation for AI-assisted processes. Finally, ensure key vendors can explain their models’ behaviour and governance, so that if EU-like regulations spread regionally, your organisation is ready rather than reactive.
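The audit-first step above can be sketched as a simple internal AI-use register. The record fields, tool names and risk categories below are illustrative assumptions for a minimal sketch, not items drawn from the article or from any regulation's actual text:

```python
from dataclasses import dataclass

# Hypothetical high-impact categories, loosely echoing the uses the
# EU AI Act treats as high-risk (assessment, hiring, credit scoring).
HIGH_IMPACT_USES = {"student assessment", "hiring", "credit scoring", "performance review"}

@dataclass
class AIToolRecord:
    name: str                # e.g. a public chatbot or a cloud AI feature
    use_case: str            # what staff actually use it for
    data_touched: list[str]  # categories of data the tool processes
    sanctioned: bool         # officially approved, or informal "shadow" use

def audit(register: list[AIToolRecord]) -> list[str]:
    """Flag records that need priority governance review."""
    findings = []
    for rec in register:
        if rec.use_case in HIGH_IMPACT_USES:
            findings.append(f"{rec.name}: high-impact use ({rec.use_case}); document human oversight")
        if not rec.sanctioned:
            findings.append(f"{rec.name}: unofficial use; bring under the internal AI policy")
        if "personal data" in rec.data_touched:
            findings.append(f"{rec.name}: check cross-border transfer and retention rules")
    return findings

# Example register covering both sanctioned and "shadow" use.
register = [
    AIToolRecord("public chatbot", "student assessment", ["personal data"], sanctioned=False),
    AIToolRecord("cloud analytics assistant", "sales reporting", ["aggregated data"], sanctioned=True),
]
for finding in audit(register):
    print("-", finding)
```

Even a register this simple makes the later steps concrete: the flagged findings map directly onto the policy clauses (human oversight, acceptable use, data handling) an organisation would draft next.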
