IS 5320 – Hrishabh Kulkarni

  • Summary Post – HW7

    HW7 Summary

    Time Log – Visits to Classmates’ Sites

    Date: February 18, 2026 From: 6:10pm To: 6:55pm
    Date: February 18, 2026 From: 7:45pm To: 8:30pm
    Date: February 20, 2026 From: 5:30pm To: 6:20pm
    Date: February 20, 2026 From: 10:00am To: 11:15am
    Date: February 20, 2026 From: 11:15am To: 11:35am

    Essay I: Summary of Activities and New Content

    This week I created two new blog posts focused on two of the most talked-about trends in artificial intelligence for 2026. The first post explores Physical AI, the combination of artificial intelligence and robotics to create machines that can sense, think, and act in the real world. The second post covers DeepSeek and the open source AI movement, explaining how free AI models are disrupting the technology industry and giving developers around the world access to powerful AI tools without expensive subscriptions. Both posts include properly sourced free images and two relevant external links each, and both are organized under the correct categories and tags for better site navigation. I also updated the general menu for visitors and the HW7 submenu for grading purposes, and ensured that commenting is enabled on all new posts.

    New Content Links This Week:

    Essay II: Summary of GA4 Event Setup

    This week I set up Google Tag Manager on my WordPress site by installing the GTM container code snippets in the header and body of my web pages using the WP Insert Headers and Footers plugin. After linking Tag Manager to my Google Analytics 4 property with a GA4 Configuration tag holding my G- prefixed Measurement ID, I created a custom GA4 Event tag to track when visitors navigate to a specific page on my site. I configured the event trigger with a Page View condition where the Page URL contains the path of one of my new posts, so any time a user lands on that page, a custom event fires and is recorded in GA4. I then used Preview mode in Google Tag Manager to verify that the tag was firing correctly before publishing the container. Within a few hours, the custom event started appearing in the Realtime report section of GA4 under “Event count by Event name,” confirming the setup was working as intended.
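    The trigger-and-tag logic described above can be sketched in plain JavaScript. This is not the real GTM API, just a minimal simulation of what the container does: a dataLayer array, a trigger condition (Page URL contains a given path), and a tag that pushes a custom event when the condition matches. The event name post_page_view and the path are illustrative placeholders, not the actual values from the site.

```javascript
// Simulated GTM dataLayer: in the browser, GTM reads pushes from this array.
const dataLayer = [];

// Hypothetical path of one of this week's posts (placeholder, not the real slug).
const TRACKED_PATH = "/deepseek-and-open-source-ai";

function onPageView(pageUrl) {
  // Trigger condition: "Page URL contains" the tracked path.
  if (pageUrl.includes(TRACKED_PATH)) {
    // Tag fires: push a custom event, which GTM would forward to GA4.
    dataLayer.push({ event: "post_page_view", page_location: pageUrl });
  }
}

onPageView("https://example.com/deepseek-and-open-source-ai"); // matches, event fires
onPageView("https://example.com/about");                       // no match, no event

console.log(dataLayer.length);   // 1
console.log(dataLayer[0].event); // post_page_view
```

    In the real setup all of this is configured in the GTM interface rather than written by hand, which is why Preview mode is the right way to confirm the trigger condition before publishing.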

    Essay III: Best Use Case for Custom Events in GA4

    One of the most practical and widely used examples of custom events in GA4 is tracking how users engage with specific content pages, which is especially useful for blogs and editorial websites like ours. For example, a website can create a custom event called “article_read” that fires only when a visitor scrolls past 75% of a blog post, indicating they actually read the content rather than just landing on the page and leaving. This kind of event gives website owners much more meaningful data than a simple page view because it measures genuine engagement instead of just traffic. By combining this custom event with parameters like the post title and category, the site owner can identify which topics keep readers most engaged and use that information to guide future content decisions. This use case directly applies to our course website because tracking which AI posts hold readers’ attention the longest can help us create better and more relevant content each week.
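    The "article_read" idea above can be sketched as a small scroll handler. The event and parameter names follow the essay; the 75% threshold uses the standard scrollTop / (scrollHeight - clientHeight) ratio, and a flag ensures the event fires only once per page view. The post title and category values are made-up examples.

```javascript
// Simulated dataLayer and a once-per-page "article_read" scroll event.
const dataLayer = [];
let articleReadFired = false;

function onScroll(scrollTop, scrollHeight, clientHeight) {
  // Fraction of the scrollable distance the reader has covered.
  const depth = scrollTop / (scrollHeight - clientHeight);
  if (depth >= 0.75 && !articleReadFired) {
    articleReadFired = true; // fire only once per page view
    dataLayer.push({
      event: "article_read",
      post_title: "Physical AI",    // example parameters, per the essay
      post_category: "AI Trends",
    });
  }
}

onScroll(200, 2000, 800);  // 200 / 1200 ≈ 17% read – too early, no event
onScroll(950, 2000, 800);  // 950 / 1200 ≈ 79% read – event fires
onScroll(1100, 2000, 800); // already fired – no duplicate
```

    GTM also offers a built-in Scroll Depth trigger that handles the threshold logic without custom code; the sketch just makes the firing condition explicit.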

  • DeepSeek and Open Source AI

    DeepSeek and Open Source AI – Why Free AI Models Are Taking Over

    Introduction

    For a long time, the most powerful AI models in the world were locked behind expensive subscriptions and controlled by a handful of big tech companies. DeepSeek, a Chinese AI startup, changed everything by releasing a powerful AI model for free that anyone in the world can use and modify.

    What Is Open Source AI?

    Open source AI means that the code, architecture, and weights of an AI model are made publicly available for anyone to download, use, and build upon. This is the opposite of closed models like OpenAI’s ChatGPT or Google’s Gemini, where access is controlled and users must pay to use the technology through an API.

    DeepSeek’s Big Moment

    DeepSeek’s initial release in January 2025 sent shockwaves through the technology industry, wiping $593 billion from NVIDIA’s market value in a single day. The model proved that a smaller, more efficient AI system could match the performance of expensive models built by companies spending billions of dollars on computing hardware.

    Why It Matters for Developers

    For startups and independent developers, DeepSeek’s open source approach means they can run high-quality AI locally on their own servers without paying for expensive API access. Industries like healthcare and finance, where data privacy is critical, benefit especially because sensitive information never has to leave their own secure systems.

    The Broader Impact on AI

    DeepSeek’s success has proven that AI progress does not depend only on massive computing power and billion-dollar budgets. It has accelerated the open-source AI movement and pushed companies like Meta with its Llama models to also release competitive free alternatives, increasing healthy competition across the entire AI industry.

    What’s Next for Open Source AI

    Reuters reports that 2026 will see a new wave of low-cost, high-performance Chinese AI models following DeepSeek’s footsteps, further intensifying competition with American AI companies. This shift in the AI landscape means more people around the world will have access to powerful AI tools regardless of their budget or location.

    Useful Links:

    Article and Image References:

    Georgia State University. (2025, February 4). How DeepSeek is changing the AI landscape. GSU News. https://news.gsu.edu/2025/02/04/how-deepseek-is-changing-the-a-i-landscape/

    Reuters. (2026, February 12). A year on from DeepSeek shock, get set for flurry of low-cost Chinese AI models. Reuters. https://www.reuters.com/world/china/year-deepseek-shock-get-set-flurry-low-cost-chinese-ai-models-2026-02-12/

    https://www.freepik.com/free-photo/double-exposure-caucasian-man-virtual-reality-vr-headset-is-presumably-gamer-hacker-cracking-code-into-secure-network-server-with-lines-code_10139138.htm#fromView=search&page=1&position=6&uuid=ac7ebbf7-8baa-49f4-aa6a-d2eae5eb97cd&query=open+source+technologyartificial+intelligence+code

  • Physical AI

    Physical AI – When Artificial Intelligence Meets the Real World

    Introduction

    Artificial intelligence has mostly lived inside computers and phones, helping us write emails or answer questions. Physical AI is a new and exciting development where AI moves beyond screens and actually interacts with the real world through robots, vehicles, and machines.

    What Is Physical AI?

    Physical AI refers to artificial intelligence systems that can perceive, understand, reason about, and act within the physical world in real time. Unlike traditional robots that simply follow pre-programmed instructions, Physical AI systems learn from their environment and adapt their behavior based on what they see and sense.

    How It Works

    These systems combine computer vision, natural language processing, and motor control into a single intelligent framework called a vision-language-action model. Specialized processors built into the robots allow them to make quick decisions on their own without needing to connect to the cloud for every action.
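    The on-device decision making described above can be illustrated with a toy sense–think–act loop. This is purely illustrative and uses no real robotics framework: each cycle reads a sensor value and chooses an action locally, with no network round trip, which is the point of putting specialized processors on the robot itself.

```javascript
// Toy "think" step: map a sensed obstacle distance (meters) to an action.
// Thresholds are invented for illustration only.
function decide(distanceToObstacleMeters) {
  if (distanceToObstacleMeters < 0.5) return "stop";
  if (distanceToObstacleMeters < 2.0) return "slow";
  return "go";
}

// Simulated sensor readings over three loop cycles.
const sensorReadings = [5.0, 1.5, 0.3];
const actions = sensorReadings.map(decide);

console.log(actions); // [ 'go', 'slow', 'stop' ]
```

    A real vision-language-action model replaces this hand-written rule with a learned policy, but the loop shape (perceive, decide, act, repeat) is the same.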

    Real-World Applications

    Physical AI is already being tested in warehouses, manufacturing plants, and logistics centers where robots autonomously sort, carry, and organize items. At CES 2026, NVIDIA announced that robotics is approaching its “ChatGPT moment,” releasing open frameworks and models specifically designed to make physical AI robots easier to build and deploy.

    Why It Matters in 2026

    Companies like NVIDIA, Arm, and Qualcomm are heavily investing in physical AI, signaling that this technology is moving from lab experiments to real-world deployment at scale. Deloitte predicts that over the next two years, physical AI will expand far beyond traditional industries into healthcare, consumer products, and public infrastructure.

    What Comes Next

    The next big milestone will be humanoid robots powered by agentic AI brains that can plan complex tasks, recover from mistakes, and operate in unpredictable environments. This convergence of physical AI and agentic AI is expected to redefine automation across nearly every industry over the coming decade.

    Useful Links:

    Article and Image References:

    Deloitte. (2026, February 8). AI goes physical: Navigating the convergence of AI and robotics. Deloitte Insights. https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2026/physical-ai-humanoid-robots.html

    Schmelzer, R. (2026, January 10). Physical AI made waves at CES 2026. What is it? Forbes. https://www.forbes.com/sites/ronschmelzer/2026/01/10/physical-ai-made-waves-at-ces-2026-what-is-it/

    https://unsplash.com/photos/a-modern-black-robot-with-blue-accents-and-orange-accents-36P9drFnbRs