<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>AI on Daydream in Boston</title>
    <link>https://blog.achiwa.co.uk/tags/ai/</link>
    <description>Recent content in AI on Daydream in Boston</description>
    <generator>Hugo -- 0.152.1</generator>
    <language>ja-jp</language>
    <lastBuildDate>Sun, 22 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://blog.achiwa.co.uk/tags/ai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Advanced Prompting Guide</title>
      <link>https://blog.achiwa.co.uk/posts/advprompt/</link>
      <pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.achiwa.co.uk/posts/advprompt/</guid>
      <description>&lt;p&gt;To be honest, the term &amp;ldquo;Prompt Engineering&amp;rdquo; sounded a bit like black magic, or even alchemy, to me; it seemed more of an art than a science.&lt;/p&gt;
&lt;p&gt;But since all IT engineers are now expected to improve their productivity with AI, I decided to learn it and dig a little deeper into the subject.&lt;/p&gt;
&lt;p&gt;This article serves both as my learning notes and as a practical guide to advanced prompting.  In later sections, I have added insights gained through my own prompt usage and Q&amp;amp;As with (or &amp;ldquo;interrogations&amp;rdquo; of) AIs.&lt;/p&gt;</description>
    </item>
    <item>
      <title>RAG workflow PoC - Python, ChromaDB, Ollama</title>
      <link>https://blog.achiwa.co.uk/posts/ragpoc/</link>
      <pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.achiwa.co.uk/posts/ragpoc/</guid>
      <description>&lt;p&gt;In the previous post, I learned how LLMs work from an LLM itself (Copilot, to be more specific).  In this post, I will learn about RAG, again from Copilot, of course.&lt;/p&gt;
&lt;p&gt;Methodology:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Ask Copilot questions, starting with &amp;ldquo;What is RAG?&amp;rdquo;, and read the answers&lt;/li&gt;
&lt;li&gt;Write down my understanding, asking Copilot whether it is correct&lt;/li&gt;
&lt;li&gt;Revise my writing until my understanding is solid&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let&amp;rsquo;s start today&amp;rsquo;s learning.&lt;/p&gt;
&lt;h2 id=&#34;what-is-rag&#34;&gt;What is RAG?&lt;/h2&gt;
&lt;p&gt;RAG (Retrieval-Augmented Generation) is a workflow that retrieves relevant external information and injects it into the user&amp;rsquo;s question for the LLM to consume.  RAG compensates for the following weaknesses of LLMs:&lt;/p&gt;</description>
    </item>
    <item>
      <title>How GenAI works - Transformer internals</title>
      <link>https://blog.achiwa.co.uk/posts/howllm/</link>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.achiwa.co.uk/posts/howllm/</guid>
      <description>&lt;p&gt;I didn&amp;rsquo;t really like GenAI because it hallucinates, consumes lots of energy, has driven up memory and SSD prices, and so on.  But as an IT engineer, I can&amp;rsquo;t ignore it.  In this post, I&amp;rsquo;ll try to learn how GenAI (LLMs) work by asking an AI a lot of questions.&lt;/p&gt;
&lt;p&gt;I mainly use Copilot because it has the most lenient hourly/daily usage limits.  What follows is mostly Copilot&amp;rsquo;s output (though I modified and summarized it); apologies if I didn&amp;rsquo;t catch every hallucination.  As it turned out, LLMs as a technology are pretty interesting.  Let&amp;rsquo;s see whether I can learn something as complex as an LLM from an AI.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
