<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI on IQLAS</title>
    <link>/tags/ai/</link>
    <description>Recent content in AI on IQLAS</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 11 Apr 2026 17:17:57 +0530</lastBuildDate>
    <atom:link href="/tags/ai/feed.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Transformer Architecture: From Attention to Modern AI</title>
      <link>/blog/the-transformer-architecture-from-attention-to-modern-ai/</link>
      <pubDate>Tue, 10 Feb 2026 00:00:00 +0000</pubDate>
      <guid>/blog/the-transformer-architecture-from-attention-to-modern-ai/</guid>
      <description>The 2017 paper &amp;lsquo;Attention Is All You Need&amp;rsquo; fundamentally changed AI. Here is a clear-eyed, technically precise explanation of how the transformer works — and why it works so well.</description>
    </item>
    <item>
      <title>The Hallucination Problem: Why LLMs Confabulate and What We Can Do About It</title>
      <link>/blog/the-hallucination-problem-why-llms-confabulate-and-what-we-can-do-about-it/</link>
      <pubDate>Wed, 28 Jan 2026 00:00:00 +0000</pubDate>
      <guid>/blog/the-hallucination-problem-why-llms-confabulate-and-what-we-can-do-about-it/</guid>
      <description>Large language models produce false statements with confident fluency. Understanding why this happens — and what mitigation strategies actually work — requires thinking carefully about what these models are, and are not.</description>
    </item>
    <item>
      <title>Retrieval-Augmented Generation: Building LLMs That Know What They Don&#39;t Know</title>
      <link>/blog/retrieval-augmented-generation-building-llms-that-know-what-they-dont-know/</link>
      <pubDate>Thu, 15 Jan 2026 00:00:00 +0000</pubDate>
      <guid>/blog/retrieval-augmented-generation-building-llms-that-know-what-they-dont-know/</guid>
      <description>RAG is not a buzzword — it is a practical architecture that grounds language models in verified knowledge. Here is how it works, why the naive version fails, and what production RAG actually looks like.</description>
    </item>
    <item>
      <title>Camus, Kafka, and the Algorithm: Absurdism in the Age of AI</title>
      <link>/blog/camus-kafka-and-the-algorithm-absurdism-in-the-age-of-ai/</link>
      <pubDate>Fri, 05 Dec 2025 00:00:00 +0000</pubDate>
      <guid>/blog/camus-kafka-and-the-algorithm-absurdism-in-the-age-of-ai/</guid>
      <description>Kafka wrote bureaucracies that crush individuals through incomprehensible procedure. Camus wrote a universe that answers human longing with silence. What happens when the algorithm inherits both roles?</description>
    </item>
    <item>
      <title>Writing in the Age of AI: What the Machine Cannot Imitate</title>
      <link>/blog/writing-in-the-age-of-ai-what-the-machine-cannot-imitate/</link>
      <pubDate>Mon, 10 Nov 2025 00:00:00 +0000</pubDate>
      <guid>/blog/writing-in-the-age-of-ai-what-the-machine-cannot-imitate/</guid>
      <description>AI-generated text is fluent, coherent, and statistically reasonable. It is also missing something specific. Naming what that is matters — not to reassure writers, but to clarify what writing is actually for.</description>
    </item>
  </channel>
</rss>
