IS 5320 – Hrishabh Kulkarni


  • Summary Post – HW 10

    Summary Post – HW 10


    Time Log — Teams’ Sites

    (Time spent visiting and commenting on other Teams’ sites)

    Date: Mar. 10, 2026 From: 09:05am To: 09:17am
    Date: Mar. 10, 2026 From: 06:10pm To: 06:22pm
    Date: Mar. 11, 2026 From: 10:15am To: 10:27am


    Time Log — Students’ Sites

    (Time spent visiting and commenting on other students’ sites)

    Date: Mar. 10, 2026 From: 10:10am To: 10:21am
    Date: Mar. 10, 2026 From: 07:30pm To: 07:41pm
    Date: Mar. 11, 2026 From: 11:05am To: 11:16am
    Date: Mar. 11, 2026 From: 08:15pm To: 08:26pm


    Essay I — Summary of Content Activities

    This week, I created two new blog posts continuing my AI trends series for 2026. The first post covers Context Engineering, exploring how the AI industry is moving beyond simple prompt engineering toward designing the entire information environment an AI system operates in — including memory, live data retrieval, user context, and governance rules. The second post covers AI Governance and Responsible AI, breaking down how landmark regulations like the EU AI Act are reshaping how every AI system gets built, deployed, and audited in 2026. Both posts include properly cited images, are open to visitor comments, and have been assigned relevant categories and tags. I updated the General Menu to reflect both new posts under the AI category and added them to the HW10 section of the HWs Menu for grading purposes. I also visited all Teams’ and students’ sites throughout the week, left thoughtful comments on posts I engaged with, and moderated and approved all incoming comments through the WordPress admin dashboard. Additionally, I monitored my site traffic daily via Google Analytics 4 and connected my GA4 data source to a Looker Studio report to visualize my KPIs.

    New Content Published This Week:

    • Context Engineering — The Skill That’s Replacing Prompt Engineering in 2026
    • AI Governance & Responsible AI — The Rules That Will Shape Every AI System in 2026


    Essay II — Summary of KPI Table

    For this week’s assignment, I developed a KPI table with three clearly defined goals to measure the performance and engagement of my website using Google Analytics 4 data. The first goal focuses on tracking browser usage distribution — measuring how many views are generated by each browser type (Chrome, Edge, Safari, Firefox, and Samsung Internet) — visualized through a bar chart in Looker Studio. The second goal analyzes content engagement by measuring the percentage of views per content type, helping identify which categories of posts resonate most with my audience. The third goal monitors user activity over time by tracking the number of active users per day, visualized through a line chart in Looker Studio to reveal traffic trends and patterns across the week. Together, these three goals provide a comprehensive view of both my audience’s technical behavior and their content preferences, allowing for data-driven decisions on what to publish and how to optimize the site experience going forward.

    KPI Table:

    Goal | KPI | Metric
    Goal 1 – Track browser usage distribution | Number of views per browser type (Chrome, Edge, Safari, Firefox, Samsung Internet) | Bar chart in Looker Studio report
    Goal 2 – Analyze content engagement | Percentage of views per content type | Scorecard and table in Looker Studio report
    Goal 3 – Monitor user activity over time | Number of active users per day | Line chart in Looker Studio report
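The three KPIs above can be sketched in a few lines of Python. The event rows and their column layout here are invented stand-ins for illustration only; a real GA4 export has a different schema, and the actual charts are built in Looker Studio, not in code.

```python
from collections import Counter

# Hypothetical GA4-style event rows: (date, browser, content_type, user_id).
events = [
    ("2026-03-10", "Chrome", "AI", "u1"),
    ("2026-03-10", "Chrome", "AI", "u2"),
    ("2026-03-10", "Safari", "HW", "u3"),
    ("2026-03-11", "Edge",   "AI", "u1"),
    ("2026-03-11", "Chrome", "HW", "u4"),
]

# Goal 1: number of views per browser type (input for the bar chart).
views_per_browser = Counter(browser for _, browser, _, _ in events)

# Goal 2: percentage of views per content type (input for the scorecard/table).
type_counts = Counter(ctype for _, _, ctype, _ in events)
pct_per_type = {t: 100 * n / len(events) for t, n in type_counts.items()}

# Goal 3: number of distinct active users per day (input for the line chart).
users_per_day = {}
for date, _, _, uid in events:
    users_per_day.setdefault(date, set()).add(uid)
active_users = {d: len(u) for d, u in users_per_day.items()}

print(views_per_browser)  # Counter({'Chrome': 3, 'Safari': 1, 'Edge': 1})
print(pct_per_type)       # {'AI': 60.0, 'HW': 40.0}
print(active_users)       # {'2026-03-10': 3, '2026-03-11': 2}
```

Each result maps directly onto one KPI row: browser counts feed the bar chart, the percentage dictionary feeds the scorecard, and the per-day user counts feed the line chart.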

    Essay III — Summary of Looker Studio Report

    This week, I connected my Google Analytics 4 property (IS5320) to Google Looker Studio and built a custom report aligned with the three KPIs identified in Part II. For Goal 1, I created a bar chart displaying the number of page views broken down by browser type — the data clearly showed Chrome as the dominant browser among my visitors, followed by Safari and Edge, providing useful insight into which browsers to prioritize for compatibility testing. For Goal 2, I built a scorecard and table visualization showing the percentage of views per content type — AI-related posts consistently drove the highest engagement, confirming that my audience is primarily interested in technology and AI content. For Goal 3, I created a line chart tracking the number of active users per day over the reporting period — the chart revealed noticeable traffic spikes on the days new blog posts were published, demonstrating a direct correlation between content publishing frequency and daily user activity. The completed Looker Studio report has been downloaded as a PDF and submitted separately to Canvas as required.

  • AI Governance & Responsible AI

    AI Governance & Responsible AI — The Rules That Will Shape Every AI System in 2026

    For years, AI development moved fast and asked questions later. Build it, ship it, fix it when something goes wrong. That approach worked — until AI started making decisions that affected millions of people’s lives.

    In 2026, the rules of the game are fundamentally changing. AI Governance and Responsible AI are no longer optional ethics exercises — they are legally binding, globally enforced, and becoming the defining framework for how every AI system gets built, deployed, and monitored.

    So, What Exactly Is AI Governance?

    AI governance is the set of policies, regulations, frameworks, and oversight mechanisms that ensure AI systems are safe, transparent, fair, and accountable. It answers the questions that pure technology cannot: Who is responsible when AI makes a wrong decision? How do we ensure AI doesn’t discriminate? What data can AI be trained on?

    Think of it this way: if AI is a car, AI governance is the entire system of traffic laws, safety standards, insurance requirements, and driving licenses that make sure that car doesn’t cause harm on the road. You can build the fastest car in the world, but without governance, it’s just a danger waiting to happen.

    Why Is It Exploding Right Now?

    2026 is a landmark year for AI regulation globally — and the pressure on organizations is intensifying fast:

    • The EU AI Act’s first major enforcement cycle begins in 2026, covering high-risk AI systems used in hiring, healthcare, credit scoring, and law enforcement — with penalties reaching up to 7% of global annual turnover for violations
    • High-risk AI systems must now undergo pre-deployment risk assessments, extensive documentation, post-market monitoring, and incident reporting before they can be deployed in EU markets
    • The EU AI Act has already required AI literacy training for all employees working with AI since February 2025, making governance a workforce issue, not just a legal one
    • ISO/IEC 42001 — the international AI management standard — is being rapidly adopted globally as organizations build formal AI governance frameworks
    • Companies are creating dedicated “AI Governance Officer” roles, following the precedent of GDPR’s Data Protection Officers — a sign that governance is becoming a full-time, C-suite concern

    Real-World Applications You’ll See Everywhere

    AI governance isn’t just a legal checkbox — it’s reshaping how AI gets built across every industry:

    • Healthcare: AI diagnostic tools must now maintain full audit trails, explainability reports, and human oversight protocols before deployment in clinical settings
    • Hiring & HR: AI screening tools face strict bias audits and transparency requirements — candidates must be told when AI is involved in decisions affecting them
    • Finance: Credit scoring and fraud detection AI must document decision logic and provide appeal mechanisms for affected customers
    • Law Enforcement: Facial recognition and predictive policing AI face the strictest restrictions — several high-risk uses are outright banned under the EU AI Act
    • Enterprise AI: Every organization deploying AI must maintain a model inventory — a register of all AI systems in use, their risk level, and their compliance status
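The model inventory mentioned above can start as something very simple: a typed register of systems with a risk level and a compliance status. This sketch is illustrative; the field names and risk labels are my own assumptions, not terms mandated by the EU AI Act.

```python
from dataclasses import dataclass

# A minimal AI system register. Field names and risk labels are illustrative.
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: str          # e.g. "minimal", "limited", "high"
    compliance_status: str   # e.g. "assessed", "pending review"

inventory = [
    AISystemRecord("resume-screener", "HR candidate triage", "high", "pending review"),
    AISystemRecord("support-chatbot", "Customer FAQ answers", "limited", "assessed"),
]

# A governance review might begin by listing every high-risk system in use.
high_risk = [r.name for r in inventory if r.risk_level == "high"]
print(high_risk)  # ['resume-screener']
```

Even this tiny structure answers the core governance questions: what AI is running, what it is for, how risky it is, and whether it has been reviewed.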

    What This Means for You

    Whether you’re a developer, a business owner, or a student entering the AI field — AI governance is not someone else’s problem. It is the new foundation every AI system must be built on.

    The developers and organizations that treat responsible AI as a competitive advantage — not a compliance burden — will be the ones that earn user trust, avoid massive penalties, and build AI that actually lasts. In 2026, the most important question isn’t just “Can we build this AI?” It’s “Should we — and if so, how do we make sure it’s safe, fair, and accountable?”


    References:
    OneTrust. (2026, February 17). Where AI regulation is heading in 2026: A global outlook. https://www.onetrust.com/blog/where-ai-regulation-is-heading-in-2026-a-global-outlook/
    Orange Business. (2026, January 7). Data & AI trends for 2026: Governance, regulation, sovereignty. https://perspective.orange-business.com/en/data-ai-trends-for-2026-governance-regulation-sovereignty-and-the-shift-to-autonomous

  • Context Engineering

    Context Engineering — The Skill That’s Replacing Prompt Engineering in 2026

    Remember when everyone was talking about “prompt engineering” as the hottest skill in AI, when how you phrased your question seemed to determine everything?

    That era is ending. In 2026, the real competitive edge isn’t about crafting a clever prompt — it’s about Context Engineering. And if you’re building anything with AI today, this is the concept that will define whether your system actually works or constantly disappoints.

    So, What Exactly Is Context Engineering?

    Prompt engineering was about how you asked the question. Context engineering is about what the AI sees before it even begins to answer.

    Think of it this way: prompt engineering is like coaching an employee right before a meeting — last-minute instructions, hoping they go well. Context engineering is like giving that employee full access to the company’s entire knowledge base, past decisions, current data, and live tools — so they walk into every meeting already fully prepared.

    In technical terms, context engineering means designing the entire information environment an AI model operates in — including memory, conversation history, retrieved documents, live API data, user profiles, and governance rules — all assembled dynamically before each query. Gartner made it official in July 2025, declaring “context engineering is in, and prompt engineering is out” as the defining shift for AI leaders.
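The idea of assembling the information environment before each query can be sketched in a few lines. Every source here (the memory list, document store, user profile, and rules) is a stand-in stub, and the keyword match stands in for a real vector search; this is an assumption-laden illustration, not a production pattern.

```python
# Minimal sketch of dynamic context assembly: before each query, gather
# governance rules, user profile, recent memory, and retrieved documents
# into one context block the model sees ahead of the question.

def assemble_context(query: str, memory: list, documents: list,
                     user_profile: dict, rules: list) -> str:
    """Build the information environment the model sees before answering."""
    # Naive keyword overlap stands in for real retrieval (vector search).
    words = query.lower().split()
    relevant = [d for d in documents if any(w in d.lower() for w in words)]
    parts = [
        "## Governance rules\n" + "\n".join(rules),
        "## User profile\n" + ", ".join(f"{k}={v}" for k, v in user_profile.items()),
        "## Conversation memory\n" + "\n".join(memory[-3:]),  # last few turns only
        "## Retrieved documents\n" + "\n".join(relevant),
        "## Query\n" + query,
    ]
    return "\n\n".join(parts)

context = assemble_context(
    query="refund policy",
    memory=["User asked about order #12 yesterday."],
    documents=["Refund policy: 30 days.", "Shipping policy: 5 business days."],
    user_profile={"tier": "premium"},
    rules=["Never reveal other customers' data."],
)
print("Refund policy: 30 days." in context)  # True
```

The point is the shape, not the stubs: the context block is rebuilt dynamically for every query, so the model always answers with current memory, data, and rules in view.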

    Why Is It Exploding Right Now?

    The momentum behind context engineering in 2026 is driven by one simple realization: AI is only as good as what it knows at the moment it responds.

    • Hallucination reduction: Systems with structured retrieval and memory show significantly lower hallucination rates by grounding answers in real enterprise data rather than guessing
    • Agentic AI needs it: As agentic AI grows, agents must carry institutional memory — definitions, workflows, past decisions — across long tasks. Context engineering provides that backbone
    • Scalability: AI went from answering isolated questions to becoming a reliable system component — plugging into logging tools, live metrics, and escalation policies — only because of context engineering
    • Enterprise adoption: Organizations in 2026 are investing in semantic layers, context graphs, and active metadata platforms to turn their institutional knowledge into machine-readable context any AI system can use
    • Performance gains: In 2026, the biggest AI performance improvements come from dynamic context selection, compression, and memory management — not from cleverly worded prompts

    Real-World Applications You’ll See Everywhere

    Context engineering is quietly powering the most reliable AI deployments of 2026:

    • Customer Support AI: Instead of a generic chatbot, a context-engineered system knows your account history, past complaints, current order status, and company policies — all before you finish typing
    • Legal & Compliance: AI systems pull the latest regulations, company policies, and case history as live context — delivering advice grounded in current reality, not outdated training data
    • Healthcare: Clinical AI assembles a patient’s full history, latest lab results, and treatment guidelines as context before making a recommendation — dramatically reducing errors
    • Developer Tools: Coding assistants like Cursor don’t just autocomplete — they understand your entire codebase, architecture decisions, and coding standards as persistent context
    • Research: AI agents pull live papers, datasets, and prior findings as context — synthesizing across sources rather than relying on what they were trained on months ago

    What This Means for You

    The organizations pulling ahead in 2026 are not the ones with the biggest AI budgets. They are the ones that have turned their institutional knowledge into machine-readable context that any AI system can use at any time.

    If prompt engineering was about talking to AI better, context engineering is about building smarter environments for AI to operate in. The question to ask yourself is no longer “How do I phrase this better?” — it’s “What does my AI need to know, and how do I make sure it always has it?”


    References:
    Atlan. (2026, March 2). What is context engineering? Complete 2026 guide. https://atlan.com/know/what-is-context-engineering/
    Sombra. (2026, January 22). The guide to AI context engineering in 2026. https://sombrainc.com/blog/ai-context-engineering-guide