<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>devkuma – AI</title>
    <link>https://www.devkuma.com/en/tags/ai/</link>
    <image>
      <url>https://www.devkuma.com/en/tags/ai/logo/180x180.jpg</url>
      <title>AI</title>
      <link>https://www.devkuma.com/en/tags/ai/</link>
    </image>
    <description>Recent content in AI on devkuma</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <managingEditor>kc@example.com (kc kim)</managingEditor>
    <webMaster>kc@example.com (kc kim)</webMaster>
    <copyright>The devkuma</copyright>
    
	  <atom:link href="https://www.devkuma.com/en/tags/ai/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>OpenAI ChatGPT Explained</title>
      <link>https://www.devkuma.com/en/docs/open-ai/chat-gpt/</link>
      <pubDate>Sun, 26 Apr 2026 15:49:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/open-ai/chat-gpt/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-chatgpt&#34;&gt;What Is ChatGPT?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;ChatGPT&lt;/strong&gt; is a conversational artificial intelligence system developed by OpenAI. Its purpose is to understand human language and generate natural responses. Unlike simple search-based services, it is distinguished by its ability to understand the user&amp;rsquo;s intent and provide more sophisticated answers that reflect context.&lt;/p&gt;
&lt;h2 id=&#34;concept-and-background-of-chatgpt&#34;&gt;Concept and Background of ChatGPT&lt;/h2&gt;
&lt;p&gt;Artificial intelligence technology has developed for a long time, and the field of natural language processing (NLP) has grown especially rapidly in recent years. ChatGPT emerged in this flow and focuses on implementing human-like conversation based on a large language model (LLM).&lt;/p&gt;
&lt;p&gt;Where traditional search engines list information around keywords, ChatGPT understands the context of a question and generates a complete answer. In other words, it is closer to a technology that &amp;ldquo;understands and reconstructs&amp;rdquo; information than simply &amp;ldquo;finding&amp;rdquo; it.&lt;/p&gt;
&lt;h2 id=&#34;how-chatgpt-works&#34;&gt;How ChatGPT Works&lt;/h2&gt;
&lt;p&gt;At the core of ChatGPT is an AI model trained on massive amounts of text data. This model learns the relationships and flow among words in sentences and operates by predicting the words or sentences most likely to come next when a particular sentence is given.&lt;/p&gt;
&lt;p&gt;For example, when a user enters a question, ChatGPT analyzes the meaning of the sentence and selects the most natural and appropriate sentences from many possibilities to compose a suitable answer. This process happens in a very short time and gives the user an experience similar to talking with a person.&lt;/p&gt;
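The next-word prediction described above can be sketched as weighted random sampling. The Kotlin snippet below is a toy illustration only: the candidate words and their probabilities are invented for this example, while a real LLM computes a distribution over its entire vocabulary with a neural network.

```kotlin
import kotlin.random.Random

// Toy sketch of the "predict the next word" idea: pick a candidate word
// with probability proportional to its (made-up) score. The candidates
// here pretend to follow the prompt "The sky is".
fun sampleNext(seed: Int): String {
    val words = arrayOf("blue", "cloudy", "falling")   // invented candidates
    val probs = doubleArrayOf(0.6, 0.3, 0.1)           // invented probabilities
    var r = Random(seed).nextDouble()                  // r is in [0, 1)
    for (i in words.indices) {
        r -= probs[i]
        if (0.0 >= r) return words[i]                  // r fell into this word's slice
    }
    return words[words.size - 1]                       // guard for rounding
}

fun main() {
    println(sampleNext(42))
}
```

Sampling rather than always taking the single most likely word is one reason the same question can produce different, yet still plausible, answers.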
&lt;h2 id=&#34;main-features&#34;&gt;Main Features&lt;/h2&gt;
&lt;h3 id=&#34;natural-conversational-ability&#34;&gt;Natural Conversational Ability&lt;/h3&gt;
&lt;p&gt;ChatGPT does not merely generate sentences. It can remember the context of the preceding conversation and continue the discussion. This allows users to ask follow-up questions on a single topic and gradually receive deeper answers.&lt;/p&gt;
&lt;h3 id=&#34;broad-usability&#34;&gt;Broad Usability&lt;/h3&gt;
&lt;p&gt;This system is not limited to a specific field and can be used across a very wide range of areas. Representative uses include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Blog and content writing&lt;/li&gt;
&lt;li&gt;Programming code generation and debugging&lt;/li&gt;
&lt;li&gt;Document summarization and translation&lt;/li&gt;
&lt;li&gt;Learning support and concept explanation&lt;/li&gt;
&lt;li&gt;Idea generation and planning support&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this way, ChatGPT is used as a productivity-enhancing tool in many environments, from individual users to companies.&lt;/p&gt;
&lt;h3 id=&#34;fast-response-speed&#34;&gt;Fast Response Speed&lt;/h3&gt;
&lt;p&gt;Because it can generate answers to complex questions in a short time, it is also valuable as a real-time communication tool.&lt;/p&gt;
&lt;h2 id=&#34;advantages-of-chatgpt&#34;&gt;Advantages of ChatGPT&lt;/h2&gt;
&lt;p&gt;The greatest advantage of ChatGPT is that &lt;strong&gt;it goes beyond simply providing information and reconstructs it in an easy-to-understand form&lt;/strong&gt;. This is especially helpful in fields that require specialized knowledge.&lt;/p&gt;
&lt;p&gt;It can also automate repetitive tasks or quickly generate drafts when ideas are needed, greatly improving work efficiency. This is why many users, including developers, writers, marketers, and students, actively use it.&lt;/p&gt;
&lt;h2 id=&#34;limitations-and-precautions&#34;&gt;Limitations and Precautions&lt;/h2&gt;
&lt;p&gt;However, ChatGPT is not a perfect system. It also has several limitations.&lt;/p&gt;
&lt;p&gt;First, not every answer can be guaranteed to be correct. Because the model generates sentences probabilistically, it may include content that differs from actual facts.
Second, reflection of the latest information may be limited.
Third, in areas requiring professional judgment such as law, medicine, and finance, it should be used as reference material, and additional verification is essential.&lt;/p&gt;
&lt;p&gt;Therefore, when using ChatGPT, users should critically review the results and check important information through separate reliable sources.&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;ChatGPT can be considered a next-generation AI tool that goes beyond a simple chatbot, understands human language, and generates new information based on it. In particular, because anyone can easily access it through a conversational interface, it is expected to bring major changes to the way information is used.&lt;/p&gt;
&lt;p&gt;As artificial intelligence technology continues to advance, systems like ChatGPT will play a key role not only in everyday life but also in various industries. In this flow, it is also important for users to develop the ability to understand and use the tool properly.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
      <category>ChatGPT</category>
      
    </item>
    
    <item>
      <title>OpenAI</title>
      <link>https://www.devkuma.com/en/docs/ai/open-api/</link>
      <pubDate>Sun, 26 Apr 2026 15:45:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/open-api/</guid>
      <description>
        
        
        &lt;p&gt;An introduction to OpenAI&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Claude Explained</title>
      <link>https://www.devkuma.com/en/docs/ai/claude/overview/</link>
      <pubDate>Fri, 07 Nov 2025 11:51:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/claude/overview/</guid>
      <description>
        
        
        &lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Claude&lt;/strong&gt; is a family of large language models (LLMs) developed by the U.S. AI research company &lt;strong&gt;Anthropic&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;It was first released in March 2023 and has since evolved through several versions, including Claude 2, Claude 3, and Claude 4.&lt;/li&gt;
&lt;li&gt;One development goal is &lt;strong&gt;safety&lt;/strong&gt; and &lt;strong&gt;responsible AI use&lt;/strong&gt;, and for this it adopts the &amp;ldquo;Constitutional AI&amp;rdquo; approach.&lt;/li&gt;
&lt;li&gt;The name &lt;strong&gt;Claude&lt;/strong&gt; comes from &lt;strong&gt;Claude Shannon&lt;/strong&gt;, the American mathematician and pioneer of information theory.&lt;/li&gt;
&lt;li&gt;Like ChatGPT, it is a &lt;strong&gt;conversational AI model&lt;/strong&gt; that performs text-based conversation, summarization, translation, code writing, document analysis, and more.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;versions-and-evolution&#34;&gt;Versions and Evolution&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Version&lt;/th&gt;
          &lt;th&gt;Release period&lt;/th&gt;
          &lt;th&gt;Notes&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Claude 1 (initial)&lt;/td&gt;
          &lt;td&gt;Early 2023&lt;/td&gt;
          &lt;td&gt;Basic release.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Claude 2&lt;/td&gt;
          &lt;td&gt;July 2023&lt;/td&gt;
          &lt;td&gt;Improved performance and response length.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Claude 3 family (Opus, Sonnet, Haiku)&lt;/td&gt;
          &lt;td&gt;2024&lt;/td&gt;
          &lt;td&gt;Model lineup by performance and speed. More diversified by use case.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Claude 4 family (for example, Opus 4, Sonnet 4.5)&lt;/td&gt;
          &lt;td&gt;2025&lt;/td&gt;
          &lt;td&gt;Strong for advanced tasks such as code generation.&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id=&#34;features-and-advantages&#34;&gt;Features and Advantages&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Designed to perform many tasks such as natural language conversation, code generation, data analysis, and image input processing.&lt;/li&gt;
&lt;li&gt;Recent versions have a much larger &lt;strong&gt;context window&lt;/strong&gt;, making them stronger at handling long context and solving complex problems. For example, Claude Sonnet 4.5 improved input/output token pricing and context window characteristics.&lt;/li&gt;
&lt;li&gt;Practical use is also considered, including APIs for developers and enterprise customers, agent usage, and tool integration.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;main-features&#34;&gt;Main Features&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Focus on safety and transparency&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Focuses on avoiding harmful answers and providing reasonable, explainable responses.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Long context processing&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Claude 3 series can understand up to &lt;strong&gt;200,000 tokens, about 150-200 pages&lt;/strong&gt;, making it strong for analyzing long documents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Intuitive conversation&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It has strong language ability in naturally understanding the user&amp;rsquo;s intent and nuance.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enterprise API support&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Supports workflow automation and document processing through integrations with Slack, Notion, Zapier, and more.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;differences-from-chatgpt&#34;&gt;Differences from ChatGPT&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Category&lt;/th&gt;
          &lt;th&gt;ChatGPT (OpenAI)&lt;/th&gt;
          &lt;th&gt;Claude (Anthropic)&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Developer&lt;/td&gt;
          &lt;td&gt;OpenAI&lt;/td&gt;
          &lt;td&gt;Anthropic&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Representative models&lt;/td&gt;
          &lt;td&gt;GPT-4, GPT-5&lt;/td&gt;
          &lt;td&gt;Claude 3 and 4 series&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Philosophy&lt;/td&gt;
          &lt;td&gt;Efficiency and accuracy focused&lt;/td&gt;
          &lt;td&gt;Safety and human-centered design&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Strengths&lt;/td&gt;
          &lt;td&gt;Code writing, many integrations&lt;/td&gt;
          &lt;td&gt;Document understanding, ethical control&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Context length&lt;/td&gt;
          &lt;td&gt;About 128k tokens (GPT-4 Turbo)&lt;/td&gt;
          &lt;td&gt;About 200k tokens (Claude 3)&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id=&#34;use-cases&#34;&gt;Use Cases&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Companies use it for coding automation, document summarization, data analysis, chatbot responses, and more. It is also mentioned for code refactoring and bug fixing.&lt;/li&gt;
&lt;li&gt;General users can also have natural conversations through web and mobile chat interfaces.&lt;/li&gt;
&lt;li&gt;Anthropic is preparing to enter the Korean market, and cooperation with Korean companies and policy organizations is also underway.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;limitations-and-notes&#34;&gt;Limitations and Notes&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;It is not yet a &amp;ldquo;complete artificial intelligence&amp;rdquo; (AGI) or a self-aware system, and errors or false information (&amp;ldquo;hallucinations&amp;rdquo;) may occur. For example, cases have been reported where a model generated incorrect legal citations.&lt;/li&gt;
&lt;li&gt;For corporate or organizational use, licensing, data security, and regulatory compliance are important.&lt;/li&gt;
&lt;li&gt;Safety and ethical considerations are required from companies and developers, and access restrictions for companies in certain countries are also being discussed.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;notes-for-korean-users&#34;&gt;Notes for Korean Users&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Korean input and output are possible, but response quality may be somewhat lower than in native-language environments such as English, so review is needed for important work.&lt;/li&gt;
&lt;li&gt;Although entry into the Korean market is planned, a domestic specialized version or full localization may not yet be complete.&lt;/li&gt;
&lt;li&gt;Depending on the purpose of use, it may be divided into free or paid plans, and APIs may incur costs.&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Why AI Models Use GPUs</title>
      <link>https://www.devkuma.com/en/docs/ai/gpu/</link>
      <pubDate>Sat, 30 Aug 2025 17:35:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/gpu/</guid>
      <description>
        
        
        &lt;p&gt;AI models, especially deep learning models, must perform &lt;strong&gt;enormous amounts of computation&lt;/strong&gt; to train on and infer from massive data. Since using only CPUs is too slow and inefficient for this work, &lt;strong&gt;GPUs specialized for large-scale parallel computation&lt;/strong&gt; are essential.&lt;/p&gt;
&lt;h2 id=&#34;what-is-a-gpu&#34;&gt;What Is a GPU?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;GPU (Graphics Processing Unit)&lt;/strong&gt; was originally designed to quickly perform &lt;strong&gt;graphics operations&lt;/strong&gt;, such as pixel rendering and 3D graphics processing.&lt;/li&gt;
&lt;li&gt;The GPU&amp;rsquo;s fast parallel computation is also why graphics in games and videos render smoothly without stuttering.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;cpu-vs-gpu-structure-comparison&#34;&gt;CPU vs GPU Structure Comparison&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Category&lt;/th&gt;
          &lt;th&gt;CPU&lt;/th&gt;
          &lt;th&gt;GPU&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Number of cores&lt;/td&gt;
          &lt;td&gt;A few high-performance cores (4-32)&lt;/td&gt;
          &lt;td&gt;Thousands to tens of thousands of small cores&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Processing method&lt;/td&gt;
          &lt;td&gt;Sequential processing&lt;/td&gt;
          &lt;td&gt;Parallel processing&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Strengths&lt;/td&gt;
          &lt;td&gt;General logic processing, complex branch handling&lt;/td&gt;
          &lt;td&gt;Large volumes of simple repetitive operations, matrix/vector operations&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Suitability for AI&lt;/td&gt;
          &lt;td&gt;Low&lt;/td&gt;
          &lt;td&gt;Very high&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The &lt;strong&gt;matrix multiplication and vector operations&lt;/strong&gt; required for deep learning training fit perfectly with the GPU&amp;rsquo;s parallel processing method.&lt;/p&gt;
&lt;h2 id=&#34;why-gpus-are-essential-for-ai&#34;&gt;Why GPUs Are Essential for AI&lt;/h2&gt;
&lt;h3 id=&#34;parallel-computing-ability&#34;&gt;Parallel Computing Ability&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Deep learning models must simultaneously calculate many parameters and connections between neurons.&lt;/li&gt;
&lt;li&gt;GPUs process these all at once in parallel, dramatically improving speed.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;matrixvector-operation-optimization&#34;&gt;Matrix/Vector Operation Optimization&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Neural networks are made up of many &lt;strong&gt;matrix&lt;/strong&gt; and &lt;strong&gt;vector&lt;/strong&gt; multiplications.&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;y = Wx + b&lt;/code&gt; (weight matrix × input vector + bias)&lt;/li&gt;
&lt;li&gt;GPUs were originally optimized for matrix operations for &lt;strong&gt;graphics processing&lt;/strong&gt;, such as pixel calculation and 3D rendering, so they fit AI computation well.&lt;/li&gt;
&lt;/ul&gt;
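The y = Wx + b step in the list above can be written out sequentially in plain Kotlin (a toy illustration, not framework code). Each iteration of the outer loop is independent of the others, which is exactly the kind of work a GPU spreads across thousands of cores.

```kotlin
// Sequential sketch of a dense layer: y = Wx + b.
// The weight matrix is stored row-major in a flat array, as GPU kernels
// commonly do; on a GPU, each output element i would run on its own core.
fun denseLayer(w: DoubleArray, rows: Int, cols: Int, x: DoubleArray, b: DoubleArray): DoubleArray {
    val y = DoubleArray(rows)
    for (i in 0 until rows) {
        var sum = b[i]                                  // start from the bias term
        for (j in 0 until cols) sum += w[i * cols + j] * x[j]
        y[i] = sum
    }
    return y
}

fun main() {
    // 2x2 weight matrix [[1, 2], [3, 4]], input [1, 1], bias [0.5, 0.5]
    val y = denseLayer(doubleArrayOf(1.0, 2.0, 3.0, 4.0), 2, 2,
                       doubleArrayOf(1.0, 1.0), doubleArrayOf(0.5, 0.5))
    println(y.toList())   // [3.5, 7.5]
}
```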
&lt;h3 id=&#34;shorter-training-time&#34;&gt;Shorter Training Time&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;During model training, &lt;strong&gt;millions to billions of parameters&lt;/strong&gt; must be updated.&lt;/li&gt;
&lt;li&gt;Training that would take weeks or months using only CPUs can be reduced to &lt;strong&gt;hours or days&lt;/strong&gt; with GPUs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;large-scale-data-processing&#34;&gt;Large-Scale Data Processing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;AI handles &lt;strong&gt;high-dimensional data&lt;/strong&gt; such as images, speech, and text.&lt;/li&gt;
&lt;li&gt;GPUs can process large batches of data simultaneously, making training and inference faster.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;stronger-inference-performance&#34;&gt;Stronger Inference Performance&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;GPUs provide fast responses not only for training but also for &lt;strong&gt;real-time services&lt;/strong&gt;, such as chatbot responses, image/speech recognition, and autonomous driving sensor data analysis.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;ecosystem-support&#34;&gt;Ecosystem Support&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Representative deep learning frameworks such as &lt;strong&gt;PyTorch&lt;/strong&gt; and &lt;strong&gt;TensorFlow&lt;/strong&gt; are optimized for GPUs through &lt;strong&gt;NVIDIA&amp;rsquo;s CUDA library&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;When using GPUs, optimized kernels can be used automatically, providing additional performance gains.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;relationship-between-gpus-and-ai&#34;&gt;Relationship Between GPUs and AI&lt;/h2&gt;
&lt;p&gt;Early AI researchers realized that CPUs alone had limitations when training on large amounts of data. They applied &lt;strong&gt;graphics-processing GPUs to deep learning training&lt;/strong&gt;, and the parallel computation structure matched AI perfectly.
Since then, &lt;strong&gt;AI and GPUs have become inseparable&lt;/strong&gt;, and most AI research and services today are built on GPU-based systems.&lt;/p&gt;
&lt;h2 id=&#34;ai-specific-hardware-beyond-gpus&#34;&gt;AI-Specific Hardware Beyond GPUs&lt;/h2&gt;
&lt;p&gt;Recently, &lt;strong&gt;AI-specialized chips&lt;/strong&gt; other than GPUs have also been developed and used.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TPU (Tensor Processing Unit)&lt;/strong&gt;: Developed by Google and optimized for tensor operations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NPU (Neural Processing Unit)&lt;/strong&gt;: For mobile devices and optimized for energy efficiency&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FPGA, ASIC&lt;/strong&gt;: Custom chips specialized for specific AI operations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, &lt;strong&gt;GPUs are still the most widely used in terms of versatility, performance, and ecosystem&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CPU = strong at sequential processing (general-purpose processor)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GPU = specialized for parallel computation (optimized for AI and deep learning)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The reason AI models use GPUs = to process large-scale matrix/vector operations quickly and simultaneously&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>What Is Gemini?</title>
      <link>https://www.devkuma.com/en/docs/ai/gemini/</link>
      <pubDate>Sat, 30 Aug 2025 17:05:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/gemini/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-google-gemini&#34;&gt;What Is Google Gemini?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Gemini&lt;/strong&gt; is a multimodal large language model (LLM) developed by Google by integrating the capabilities of the &lt;strong&gt;DeepMind&lt;/strong&gt; and &lt;strong&gt;Brain&lt;/strong&gt; teams.&lt;/li&gt;
&lt;li&gt;Its key feature is that it can understand and process many forms of information, including text, audio, images, and video.&lt;/li&gt;
&lt;li&gt;It was first released in December 2023, and &lt;strong&gt;Gemini 1.0&lt;/strong&gt; launched in three versions: &lt;strong&gt;Ultra&lt;/strong&gt;, &lt;strong&gt;Pro&lt;/strong&gt;, and &lt;strong&gt;Nano&lt;/strong&gt;. These targeted complex tasks, general-purpose tasks, and on-device processing respectively.&lt;/li&gt;
&lt;li&gt;It has continued to evolve rapidly, and &lt;strong&gt;Gemini 2.5 Flash&lt;/strong&gt; and &lt;strong&gt;2.5 Pro&lt;/strong&gt; are currently used as major versions. Flash focuses on response speed, while Pro provides advanced reasoning and code generation capabilities, with improved audio output and security features.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://gemini.google.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://gemini.google.com/&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/gemini.jpg&#34; alt=&#34;Google Gemini&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;main-features-of-gemini&#34;&gt;Main Features of Gemini&lt;/h2&gt;
&lt;h3 id=&#34;multimodality&#34;&gt;&lt;strong&gt;Multimodality&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Unlike earlier AI models that were mainly limited to text, Gemini can understand and process text, images, audio, and video together in an integrated way.&lt;/li&gt;
&lt;li&gt;For example, users can ask questions while watching a video or provide images and text together to request a specific task.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;image-editing-nano-banana--gemini-25-flash-image&#34;&gt;Image Editing (Nano-Banana / Gemini 2.5 Flash Image)&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Gemini 2.5 Flash Image&lt;/strong&gt; model, called &amp;ldquo;&lt;strong&gt;Nano-Banana&lt;/strong&gt;,&amp;rdquo; lets users edit or composite images with natural language and provides advanced features that preserve characteristics such as faces and objects consistently.&lt;/li&gt;
&lt;li&gt;For example, it can combine multiple images, change backgrounds, and modify styles or clothing. AI-generated images include visible or invisible watermarks so generation can be verified.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=&#34;voice-and-voice-interaction&#34;&gt;Voice and Voice Interaction&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Gemini Live&lt;/strong&gt; feature is a real-time conversational interface using voice, and it can be used with screen and camera sharing, especially on Pixel 9.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;various-models&#34;&gt;Various Models&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Gemini is divided into several models depending on the purpose of use.
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Gemini Ultra:&lt;/strong&gt; The most powerful model optimized for complex tasks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gemini Pro:&lt;/strong&gt; A balanced-performance model that can be used for a wide range of tasks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gemini Flash:&lt;/strong&gt; A model suitable for tasks where cost efficiency and fast response speed are important.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;strong-performance&#34;&gt;Strong Performance&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Gemini shows excellent performance in many benchmarks, including complex reasoning, coding, and math problem solving. In particular, it has also shown results surpassing human expert scores on the Massive Multitask Language Understanding (MMLU) benchmark.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;enhanced-everyday-assistant-role&#34;&gt;Enhanced Everyday Assistant Role&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Gemini for Home&lt;/strong&gt; is a new AI-based life assistant replacing Google Assistant. It includes daily routine management, more natural conversation, and smart home device control. Early access is scheduled to begin in October 2025.&lt;/li&gt;
&lt;li&gt;Gemini is also integrated into &lt;strong&gt;Android Auto&lt;/strong&gt;, allowing users to send messages, check email, and perform various tasks with voice commands while driving.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;google-workspace-integration-and-multilingual-support&#34;&gt;Google Workspace Integration and Multilingual Support&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Gemini connects &lt;strong&gt;Gmail&lt;/strong&gt;, &lt;strong&gt;Calendar&lt;/strong&gt;, &lt;strong&gt;Maps&lt;/strong&gt;, &lt;strong&gt;Photos&lt;/strong&gt;, &lt;strong&gt;YouTube&lt;/strong&gt;, and more, helping users work across multiple apps. It also provides features such as schedule management, alarm setting, calls, and presentation practice.&lt;/li&gt;
&lt;li&gt;It currently supports more than 40 languages and can be used through mobile apps (Android, iOS) and the web. &lt;strong&gt;Gemini 2.5 Flash&lt;/strong&gt; and &lt;strong&gt;2.5 Pro&lt;/strong&gt; are also provided as paid usage-based models.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;use-cases&#34;&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Gemini can be used in many fields, including:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Creative work:&lt;/strong&gt; Writing, image generation, idea brainstorming, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Learning and research:&lt;/strong&gt; Summarizing complex topics, analyzing papers, creating study plans, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Coding:&lt;/strong&gt; Code generation, debugging, optimization, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Customer service:&lt;/strong&gt; Providing accurate and useful answers to questions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Gemini can be used with various Google Cloud services such as Google AI Studio and Vertex AI, and it also powers Google&amp;rsquo;s consumer AI assistant of the same name.&lt;/p&gt;
&lt;h2 id=&#34;competitiveness-and-comparison-points&#34;&gt;Competitiveness and Comparison Points&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Google has announced that Gemini delivers benchmark performance similar to or higher than OpenAI&amp;rsquo;s GPT-4, but actual user experience may differ by use case.&lt;/li&gt;
&lt;li&gt;Gemini is especially differentiated by its &lt;strong&gt;multimodal design&lt;/strong&gt;, &lt;strong&gt;long context window&lt;/strong&gt;, and &lt;strong&gt;enhanced image and voice processing capabilities&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Area&lt;/th&gt;
          &lt;th&gt;Feature Summary&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Multimodal processing&lt;/td&gt;
          &lt;td&gt;Can understand and generate text, images, audio, video, and code&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Model lineup&lt;/td&gt;
          &lt;td&gt;Gemini 1.0 (Ultra/Pro/Nano) -&amp;gt; latest versions such as 2.5 Flash / Pro&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Image editing&lt;/td&gt;
          &lt;td&gt;Nano-Banana: natural language-based editing with consistent feature preservation&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Voice interface&lt;/td&gt;
          &lt;td&gt;Gemini Live: voice-based real-time conversation&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Everyday assistant features&lt;/td&gt;
          &lt;td&gt;Gemini for Home, Android Auto voice support&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Workspace integration&lt;/td&gt;
          &lt;td&gt;Integrated with Gmail, Calendar, and more; can connect multiple apps&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Competitiveness&lt;/td&gt;
          &lt;td&gt;Multimodal design, long context, and high benchmarks compared with GPT-4&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Pricing model&lt;/td&gt;
          &lt;td&gt;Free + premium plans, such as Gemini 2.5 Pro&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id=&#34;future-direction&#34;&gt;Future Direction&lt;/h2&gt;
&lt;p&gt;Gemini is expected to be deeply integrated into many areas, including &lt;strong&gt;smart homes&lt;/strong&gt;, &lt;strong&gt;vehicles&lt;/strong&gt;, &lt;strong&gt;productivity tools&lt;/strong&gt;, and &lt;strong&gt;multimedia generation&lt;/strong&gt;, and new features and versions continue to be released.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
      <category>Gemini</category>
      
    </item>
    
    <item>
      <title>Understanding and Using Artificial Intelligence</title>
      <link>https://www.devkuma.com/en/docs/ai/overview/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/overview/</guid>
      <description>
        
        
        &lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Artificial intelligence (AI) is no longer a technology of the future. It is now a core technology deeply embedded in everyday life. This section systematically explains AI from its basic concepts to its latest applications, helping readers understand the nature of AI and use it in practice.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Building an Application with the Gemini API</title>
      <link>https://www.devkuma.com/en/docs/ai/gemini/api/</link>
      <pubDate>Wed, 28 Jan 2026 18:08:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/gemini/api/</guid>
      <description>
        
        
        &lt;h2 id=&#34;issuing-a-gemini-key&#34;&gt;Issuing a Gemini Key&lt;/h2&gt;
&lt;p&gt;To use the Google Gemini API, you need to issue an API key.&lt;/p&gt;
&lt;h3 id=&#34;1-access-the-gemini-developer-api-site&#34;&gt;1. Access the Gemini Developer API Site&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Go to &lt;a href=&#34;https://ai.google.dev/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://ai.google.dev/&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt; and click the &lt;strong&gt;Explore models in Google AI Studio&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;On first access, the terms of service are displayed. Review them and click the &amp;ldquo;Continue&amp;rdquo; button.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/gemini-api-1.png&#34; alt=&#34;Gemini Developer API&#34;&gt;&lt;/p&gt;
&lt;h3 id=&#34;2-issue-a-get-api-key&#34;&gt;2. Issue a Key from the Get API Key Menu&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Select the &lt;strong&gt;Get API Key&lt;/strong&gt; menu to issue a key.&lt;/li&gt;
&lt;li&gt;If you have already issued one, it will appear in the list. If you have not, you can use the &lt;strong&gt;Create API key&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/gemini-api-2.png&#34; alt=&#34;Gemini API Key&#34;&gt;&lt;/p&gt;
&lt;h3 id=&#34;free-tier&#34;&gt;Free Tier&lt;/h3&gt;
&lt;p&gt;The free tier imposes usage limits per model.
For gemini-3-flash, the maximum number of requests per day (RPD) is only 20.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/gemini-api-3.png&#34; alt=&#34;Gemini API Key Free&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;client-development-using-a-library&#34;&gt;Client Development Using a Library&lt;/h2&gt;
&lt;p&gt;Here, we will look at how to call the API using the Google GenAI SDK with Kotlin.&lt;/p&gt;
&lt;h3 id=&#34;project-creation&#34;&gt;Project Creation&lt;/h3&gt;
&lt;p&gt;Create a project using your IDE.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;.
├── build.gradle.kts
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradle.properties
├── gradlew
├── gradlew.bat
├── settings.gradle.kts
└── src
    ├── main
    │   ├── kotlin
    │   │   └── Main.kt
    │   └── resources
    └── test
        ├── kotlin
        └── resources
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;adding-the-library&#34;&gt;Adding the Library&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;/build.gradle.kts&lt;/strong&gt;&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-kotlin&#34; data-lang=&#34;kotlin&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000&#34;&gt;dependencies&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;implementation&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;com.google.genai:google-genai:1.36.0&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;ul&gt;
&lt;li&gt;Check the GitHub repository (&lt;a href=&#34;https://github.com/googleapis/java-genai&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;googleapis/java-genai&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;) and use the latest version.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;client-development&#34;&gt;Client Development&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;/src/main/kotlin/Main.kt&lt;/strong&gt;&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-kotlin&#34; data-lang=&#34;kotlin&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;package&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;com.devkuma&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;com.google.genai.Client&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;fun&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;main&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;client&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;Client&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;().&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;apiKey&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;GEMINI_API_KEY&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;).&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;response&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000&#34;&gt;client&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;models&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;generateContent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;gemini-3-flash-preview&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Explain artificial intelligence in one sentence.&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;null&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;response&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;text&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Output:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Artificial intelligence is technology that implements human learning, reasoning, and perception abilities in computer systems so machines can perform intelligent tasks.
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;Replace &lt;strong&gt;GEMINI_API_KEY&lt;/strong&gt; with the key you issued.&lt;/li&gt;
&lt;/ul&gt;
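Hardcoding the key as in the example above is fine for a quick test, but it is safer to read it from an environment variable. The following is a minimal sketch in plain Kotlin (no SDK assumptions; the `resolveApiKey` helper is illustrative, not part of the Google GenAI SDK):

```kotlin
// Resolve the API key from the environment instead of hardcoding it.
// The lookup takes a map parameter (defaulting to the real environment)
// so the logic can be exercised without setting real variables.
fun resolveApiKey(env: Map<String, String> = System.getenv()): String =
    env["GEMINI_API_KEY"]?.takeIf { it.isNotBlank() }
        ?: error("GEMINI_API_KEY is not set")

// Usage with the client from the example above:
// val client = Client.builder().apiKey(resolveApiKey()).build()
```

This keeps the key out of source control and fails fast with a clear message when the variable is missing.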
&lt;h2 id=&#34;rest&#34;&gt;REST&lt;/h2&gt;
&lt;p&gt;The Gemini API can also be called as a REST API.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;curl &amp;#34;https://generativelanguage.googleapis.com/v1beta/models/gemini-3-flash-preview:generateContent&amp;#34; \
  -H &amp;#34;x-goog-api-key: $GEMINI_API_KEY&amp;#34; \
  -H &amp;#39;Content-Type: application/json&amp;#39; \
  -X POST \
  -d &amp;#39;{
    &amp;#34;contents&amp;#34;: [
      {
        &amp;#34;parts&amp;#34;: [
          {
            &amp;#34;text&amp;#34;: &amp;#34;Explain artificial intelligence in one sentence.&amp;#34;
          }
        ]
      }
    ]
  }&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Output:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;{
  &amp;#34;candidates&amp;#34;: [
    {
      &amp;#34;content&amp;#34;: {
        &amp;#34;parts&amp;#34;: [
          {
            &amp;#34;text&amp;#34;: &amp;#34;Artificial intelligence is technology that implements human learning, reasoning, and perception abilities in computer systems so they can perform intelligent tasks.&amp;#34;,
            &amp;#34;thoughtSignature&amp;#34;: &amp;#34;Er8OCrwOAXLI2nzqxL3K8LCAB020BPaY+sv89....&amp;#34;
          }
        ],
        &amp;#34;role&amp;#34;: &amp;#34;model&amp;#34;
      },
      &amp;#34;finishReason&amp;#34;: &amp;#34;STOP&amp;#34;,
      &amp;#34;index&amp;#34;: 0
    }
  ],
  &amp;#34;usageMetadata&amp;#34;: {
    &amp;#34;promptTokenCount&amp;#34;: 13,
    &amp;#34;candidatesTokenCount&amp;#34;: 32,
    &amp;#34;totalTokenCount&amp;#34;: 415,
    &amp;#34;promptTokensDetails&amp;#34;: [
      {
        &amp;#34;modality&amp;#34;: &amp;#34;TEXT&amp;#34;,
        &amp;#34;tokenCount&amp;#34;: 13
      }
    ],
    &amp;#34;thoughtsTokenCount&amp;#34;: 370
  },
  &amp;#34;modelVersion&amp;#34;: &amp;#34;gemini-3-flash-preview&amp;#34;,
  &amp;#34;responseId&amp;#34;: &amp;#34;5rh6afO6NYb22roPsKr06Ao&amp;#34;
}
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;references&#34;&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://ai.google.dev/gemini-api/docs&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Gemini API | Google AI for Developers&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The example code above can be found on &lt;a href=&#34;https://github.com/devkuma/kotlin-tutorial/tree/main/gemini-api-tutorial&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;GitHub&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
      <category>kotlin</category>
      
      <category>Gemini</category>
      
    </item>
    
    <item>
      <title>MCP Function Calling with the Google Gemini API</title>
      <link>https://www.devkuma.com/en/docs/ai/gemini/api-mcp/</link>
      <pubDate>Fri, 30 Jan 2026 16:28:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/gemini/api-mcp/</guid>
      <description>
        
        
        &lt;h2 id=&#34;how-function-calling-works&#34;&gt;How Function Calling Works&lt;/h2&gt;
&lt;p&gt;The details of how function calling works are explained in the &lt;a href=&#34;https://ai.google.dev/gemini-api/docs/function-calling#how-it-works&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;official Gemini API documentation&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/gemini-function-calling-overview.png&#34; alt=&#34;Function calling overview&#34;&gt;&lt;/p&gt;
&lt;p&gt;In brief:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Function calling&lt;/strong&gt; is a structured collaboration method among an application, an LLM, and external functions.&lt;/li&gt;
&lt;li&gt;The application first defines function declarations to describe the function name, parameters, and purpose to the model.&lt;/li&gt;
&lt;li&gt;When the user prompt and function declarations are passed to the &lt;strong&gt;LLM&lt;/strong&gt;, the model decides whether a function call is needed and returns a &lt;strong&gt;structured JSON response&lt;/strong&gt; or a regular text response.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Function execution is the responsibility of the application, not the model&lt;/strong&gt;. The model only provides the function name and arguments.&lt;/li&gt;
&lt;li&gt;When the execution result is sent back to the model, the model reflects it and generates a &lt;strong&gt;user-friendly final answer&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
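The round trip described above can be sketched without any network calls. In this toy Kotlin example, `fakeModel` stands in for the LLM's decision and `getWeather` is a hypothetical tool; neither is part of the Gemini SDK, and the real model returns structured responses rather than these simple strings:

```kotlin
// A structured "tool call" as the model would return it: name plus arguments.
data class FunctionCall(val name: String, val args: Map<String, String>)

// Step 1: the application declares its functions (name + purpose) to the model.
val declarations = mapOf(
    "getWeather" to "Returns the current weather for a city."
)

// Step 2: stand-in for the LLM. It decides whether a function call is needed
// and returns either a structured call or null (meaning: answer as plain text).
fun fakeModel(prompt: String): FunctionCall? =
    if ("weather" in prompt.lowercase())
        FunctionCall("getWeather", mapOf("city" to "Seoul"))
    else null

// Step 3: execution is the application's responsibility, not the model's.
fun execute(call: FunctionCall): String =
    when (call.name) {
        "getWeather" -> "Sunny in ${call.args["city"]}"
        else -> error("Unknown function: ${call.name}")
    }

// Step 4: the execution result is sent back so the model can phrase the answer.
fun answer(prompt: String): String {
    val call = fakeModel(prompt) ?: return "plain text answer"
    val result = execute(call)
    return "Based on $result, it is a good day."
}
```

The real flow in the code below follows the same four steps, with the MCP server supplying the tool declarations and executing the calls.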
&lt;h2 id=&#34;preparing-the-mcp-server&#34;&gt;Preparing the MCP Server&lt;/h2&gt;
&lt;p&gt;We will use the simple server described in the following document.&lt;br&gt;
&lt;a href=&#34;https://www.devkuma.com/docs/spring-ai/mcp-server-auth/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Adding Authentication to a WebMVC MCP Server Built with Spring and Kotlin&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;client-development&#34;&gt;Client Development&lt;/h2&gt;
&lt;p&gt;The client will use the application created in the following document.&lt;br&gt;
&lt;a href=&#34;https://www.devkuma.com/docs/ai/gemini/api/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Building an Application with the Gemini API&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-kotlin&#34; data-lang=&#34;kotlin&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;package&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;com.devkuma.sample1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;com.google.genai.Client&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;com.google.genai.types.*&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;io.modelcontextprotocol.client.McpClient&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;io.modelcontextprotocol.client.transport.HttpClientStreamableHttpTransport&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;io.modelcontextprotocol.spec.McpSchema.CallToolRequest&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;java.net.http.HttpRequest&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;import&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;java.util.stream.Collectors&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;fun&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;main&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Configuration
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpServerUrl&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;http://localhost:8080&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpApiKey&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;api01.mycustomapikey&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;geminiApiKey&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;GEMINI_API_KEY&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;========================================&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Gemini API + MCP Function Calling Demo&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;========================================&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// 1. Initialize MCP client
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;request&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;HttpRequest&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;newBuilder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;header&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Content-Type&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;application/json&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;header&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;X-API-key&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpApiKey&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;transport&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;HttpClientStreamableHttpTransport&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;mcpServerUrl&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;requestBuilder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;request&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpClient&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;McpClient&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;sync&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;transport&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;mcpClient&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;initialize&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;mcpClient&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;ping&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Fetch the list of tools available from the MCP server.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;toolsList&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpClient&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;listTools&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Available Tools = &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;$toolsList&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// 2. Initialize Gemini client and configure functions
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionDeclarations&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;toolsList&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;tools&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;().&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;stream&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;map&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;({&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;t&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;-&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#000&#34;&gt;FunctionDeclaration&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;t&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt; &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// MCP tool name
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;description&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;t&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;description&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt; &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// MCP tool description
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;parametersJsonSchema&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;t&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;inputSchema&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt; &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Receive as Object type
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;collect&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Collectors&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;toList&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;tool&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;Tool&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;Tool&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionDeclarations&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionDeclarations&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;config&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;GenerateContentConfig&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;tools&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;listOf&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;tool&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;client&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;Client&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;().&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;apiKey&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;geminiApiKey&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;).&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// 4. Process user question
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;userMessage&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Tell me the weather in Seoul&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;========================================&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;User: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;$userMessage&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;========================================&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// 5. First Gemini API call
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;var&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;response&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;client&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;models&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;generateContent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;gemini-3-flash-preview&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000&#34;&gt;userMessage&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000&#34;&gt;config&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;=== First Response ===&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Candidates: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;${response.candidates()}&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// 6. Check and process Function Call
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;candidatesOpt&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;response&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;candidates&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;candidatesOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isPresent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;candidates&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;candidatesOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;get&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;candidates&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isNotEmpty&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;candidate&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;candidates&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;[&lt;/span&gt;&lt;span style=&#34;color:#0000cf;font-weight:bold&#34;&gt;0&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;contentOpt&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;candidate&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;contentOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isPresent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;contentOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;get&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;partsOpt&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;parts&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;partsOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isPresent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;parts&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;partsOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;get&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Parts: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;$parts&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Check whether there is a Function Call
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionCalls&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;parts&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;filter&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;p&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;Part&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;-&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;p&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionCall&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;!=&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;null&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;p&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionCall&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;().&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isPresent&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionCalls&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isNotEmpty&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;=== Function Calls Detected ===&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Build conversation history (user message + model function call)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;contents&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mutableListOf&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;&amp;lt;&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;&amp;gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Add user message
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;contents&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;add&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#000&#34;&gt;Content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;role&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;user&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;parts&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;listOf&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Part&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;().&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;text&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;userMessage&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;).&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Add model response (function call)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;contents&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;add&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Process each Function Call
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionResponseParts&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mutableListOf&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;&amp;lt;&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;Part&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;&amp;gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;for&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;part&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;in&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionCalls&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionCallOpt&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;part&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionCall&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionCallOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isPresent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionCall&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionCallOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;get&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionNameOpt&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionCall&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionArgsOpt&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionCall&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;args&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionNameOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isPresent&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionArgsOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isPresent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionName&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionNameOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;get&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionArgs&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionArgsOpt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;get&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Function Call: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;$functionName&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Arguments: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;$functionArgs&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Step 3: Call the tool on the MCP server
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpResult&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpClient&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;callTool&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;CallToolRequest&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionName&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionArgs&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;MCP Result: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;$mcpResult&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Extract content from MCP result
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpContent&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpResult&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;resultText&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;if&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;mcpContent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;isNotEmpty&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;())&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Convert MCP Content to string
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// TextContent has a text field
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;mcpContent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;[&lt;/span&gt;&lt;span style=&#34;color:#0000cf;font-weight:bold&#34;&gt;0&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;when&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                            &lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;is&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;io&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;modelcontextprotocol&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;spec&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;McpSchema&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;TextContent&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;-&amp;gt;&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                                &lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;text&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                            &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                            &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;else&lt;/span&gt; &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;-&amp;gt;&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;toString&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;else&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;No result&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Extracted Result: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;$resultText&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Create Function Response Part
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;val&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;functionResponsePart&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;Part&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionResponse&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                            &lt;span style=&#34;color:#000&#34;&gt;FunctionResponse&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionName&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;response&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;mapOf&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;result&amp;#34;&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;to&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;resultText&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                    &lt;span style=&#34;color:#000&#34;&gt;functionResponseParts&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;add&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionResponsePart&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Add function response as user role
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;contents&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;add&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#000&#34;&gt;Content&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;builder&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;role&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;user&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;parts&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;functionResponseParts&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;build&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// Step 4: Call Gemini API again with function call results
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;=== Calling Gemini Again with Function Results ===&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;response&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#000&#34;&gt;client&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;models&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;generateContent&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;gemini-3-flash-preview&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#000&#34;&gt;contents&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                            &lt;span style=&#34;color:#000&#34;&gt;config&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// 7. Print final response
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;=== Final Response ===&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Assistant: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;${response.text()}&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;else&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                        &lt;span style=&#34;color:#000&#34;&gt;println&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;(&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;\n&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;No function calls detected. Direct response: &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;${response.text()}&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;                &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;            &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;        &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#8f5902;font-style:italic&#34;&gt;// 8. Cleanup: close MCP client
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000&#34;&gt;mcpClient&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;closeGracefully&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Output:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;========================================
Gemini API + MCP Function Calling Demo
========================================

Available Tools = ListToolsResult[tools=[Tool[name=get_weather, title=null, description=Return the weather of a given city., inputSchema=JsonSchema[type=object, properties={city={type=string, description=The city for which to get the weather}}, required=[city], additionalProperties=false, defs=null, definitions=null], outputSchema=null, annotations=null, meta=null]], nextCursor=null, meta=null]

========================================
User: Tell me the weather in Seoul
========================================

=== First Response ===
Candidates: Optional[[Candidate{content=Optional[Content{parts=Optional[[Part{mediaResolution=Optional.empty, codeExecutionResult=Optional.empty, executableCode=Optional.empty, fileData=Optional.empty, functionCall=Optional[FunctionCall{id=Optional.empty, args=Optional[{city=Seoul}], name=Optional[get_weather], partialArgs=Optional.empty, willContinue=Optional.empty}], functionResponse=Optional.empty, inlineData=Optional.empty, text=Optional.empty, thought=Optional.empty, thoughtSignature=Optional[[B@a7f0ab6], videoMetadata=Optional.empty}]], role=Optional[model]}], citationMetadata=Optional.empty, finishMessage=Optional.empty, tokenCount=Optional.empty, finishReason=Optional[STOP], avgLogprobs=Optional.empty, groundingMetadata=Optional.empty, index=Optional[0], logprobsResult=Optional.empty, safetyRatings=Optional.empty, urlContextMetadata=Optional.empty}]]
Parts: [Part{mediaResolution=Optional.empty, codeExecutionResult=Optional.empty, executableCode=Optional.empty, fileData=Optional.empty, functionCall=Optional[FunctionCall{id=Optional.empty, args=Optional[{city=Seoul}], name=Optional[get_weather], partialArgs=Optional.empty, willContinue=Optional.empty}], functionResponse=Optional.empty, inlineData=Optional.empty, text=Optional.empty, thought=Optional.empty, thoughtSignature=Optional[[B@a7f0ab6], videoMetadata=Optional.empty}]

=== Function Calls Detected ===
Function Call: get_weather
Arguments: {city=Seoul}
MCP Result: CallToolResult[content=[TextContent[annotations=null, text=&amp;#34;The weather in Seoul is good.&amp;#34;, meta=null]], isError=false, structuredContent=null, meta=null]
Extracted Result: &amp;#34;The weather in Seoul is good.&amp;#34;

=== Calling Gemini Again with Function Results ===

=== Final Response ===
Assistant: The weather in Seoul is good.
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;references&#34;&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://ai.google.dev/gemini-api/docs/function-calling?hl=ko&amp;amp;example=meeting&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Gemini API | Function calling with the Gemini API&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
      <category>kotlin</category>
      
      <category>Gemini</category>
      
    </item>
    
    <item>
      <title>What Is OpenAI Codex?</title>
      <link>https://www.devkuma.com/en/docs/open-ai/codex/</link>
      <pubDate>Sun, 26 Apr 2026 15:49:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/open-ai/codex/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-codex&#34;&gt;What Is Codex?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Codex&lt;/strong&gt; is an &lt;strong&gt;AI-based software engineering agent&lt;/strong&gt; developed by OpenAI.&lt;br&gt;
It differs from ChatGPT in that it is not simply a tool that &amp;ldquo;helps with code&amp;rdquo; but an &lt;strong&gt;AI that actually performs tasks on your behalf&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;one-line-definition&#34;&gt;One-Line Definition&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;An AI developer that performs development work when you delegate it&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;ChatGPT -&amp;gt; A tool that helps you think&lt;/li&gt;
&lt;li&gt;Codex -&amp;gt; An agent that handles the work for you&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This difference is the core point.&lt;/p&gt;
&lt;h2 id=&#34;difference-between-chatgpt-and-codex&#34;&gt;Difference Between ChatGPT and Codex&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Category&lt;/th&gt;
          &lt;th&gt;ChatGPT&lt;/th&gt;
          &lt;th&gt;Codex&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Role&lt;/td&gt;
          &lt;td&gt;Questions, explanations, ideas&lt;/td&gt;
          &lt;td&gt;Performs actual work&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Output&lt;/td&gt;
          &lt;td&gt;Text answers&lt;/td&gt;
          &lt;td&gt;Code + execution results&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Scope&lt;/td&gt;
          &lt;td&gt;Focused on a single response&lt;/td&gt;
          &lt;td&gt;File/project-level work&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Method&lt;/td&gt;
          &lt;td&gt;Conversational&lt;/td&gt;
          &lt;td&gt;Task delegation&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Simply put:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ChatGPT: &amp;ldquo;This is how you can do it&amp;rdquo;&lt;/li&gt;
&lt;li&gt;Codex: &amp;ldquo;I went ahead and did it (code included)&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;core-features-of-codex&#34;&gt;Core Features of Codex&lt;/h2&gt;
&lt;h3 id=&#34;performs-real-code-work&#34;&gt;Performs Real Code Work&lt;/h3&gt;
&lt;p&gt;Codex directly performs tasks such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Feature implementation&lt;/li&gt;
&lt;li&gt;Bug fixes&lt;/li&gt;
&lt;li&gt;Refactoring&lt;/li&gt;
&lt;li&gt;Writing test code&lt;/li&gt;
&lt;li&gt;Creating pull requests&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Its key feature is that it does not merely generate content, but &lt;strong&gt;completes the work through to the end&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id=&#34;project-level-understanding&#34;&gt;Project-Level Understanding&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Reads and analyzes the entire codebase&lt;/li&gt;
&lt;li&gt;Understands relationships among modules&lt;/li&gt;
&lt;li&gt;Tracks data flow&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, it works based on the &amp;ldquo;entire service,&amp;rdquo; not just &amp;ldquo;one file.&amp;rdquo;&lt;/p&gt;
&lt;h3 id=&#34;independent-execution-environment-sandbox&#34;&gt;Independent Execution Environment (Sandbox)&lt;/h3&gt;
&lt;p&gt;For each task, Codex repeats steps such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating an independent execution environment&lt;/li&gt;
&lt;li&gt;Modifying code&lt;/li&gt;
&lt;li&gt;Running tests&lt;/li&gt;
&lt;li&gt;Verifying results&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It can also iterate on this loop, repeatedly modifying the code and re-running the tests until they pass.&lt;/p&gt;
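&lt;p&gt;The sandbox loop above can be pictured with a short, purely illustrative Kotlin sketch. This is a toy simulation, not actual Codex internals; every name in it is hypothetical:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// Illustrative toy of the &amp;#34;iterate until tests pass&amp;#34; loop; not real Codex code.
class Sandbox {
    private var attempt = 0
    fun applyCodeChange() { attempt++ }        // modify code in the sandbox
    fun runTests(): Boolean = attempt &gt;= 3     // simulated: the fix lands on attempt 3
}

fun runTask(maxAttempts: Int = 5): Boolean {
    val sandbox = Sandbox()                    // independent execution environment
    repeat(maxAttempts) {
        sandbox.applyCodeChange()
        if (sandbox.runTests()) return true    // verify results; stop when green
    }
    return false                               // give up after the attempt budget
}

fun main() {
    println(runTask())                         // prints: true
}
&lt;/code&gt;&lt;/pre&gt;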
&lt;h3 id=&#34;parallel-work-multitasking&#34;&gt;Parallel Work (Multitasking)&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Performs multiple tasks at the same time&lt;/li&gt;
&lt;li&gt;Continues working in the background&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Developing feature A&lt;/li&gt;
&lt;li&gt;Fixing bug B&lt;/li&gt;
&lt;li&gt;Adding test C&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It can take tasks a human developer would handle one at a time and process them in parallel.&lt;/p&gt;
&lt;h3 id=&#34;automation-agent&#34;&gt;Automation Agent&lt;/h3&gt;
&lt;p&gt;Codex can handle not only simple requests but also tasks such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CI/CD work&lt;/li&gt;
&lt;li&gt;Issue organization&lt;/li&gt;
&lt;li&gt;Log analysis&lt;/li&gt;
&lt;li&gt;Repetitive work processing&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;practical-use-examples&#34;&gt;Practical Use Examples&lt;/h2&gt;
&lt;h3 id=&#34;development-work&#34;&gt;Development Work&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;Add caching to this API&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Refactor this code and add tests&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;-&amp;gt; Code modification + test execution + result reporting&lt;/p&gt;
&lt;h3 id=&#34;test-automation&#34;&gt;Test Automation&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Configuring a Testcontainers environment&lt;/li&gt;
&lt;li&gt;Generating Kotest/JUnit tests&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is especially powerful for Spring-based development.&lt;/p&gt;
&lt;h3 id=&#34;maintenance&#34;&gt;Maintenance&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Analyzing old code&lt;/li&gt;
&lt;li&gt;Replacing deprecated APIs&lt;/li&gt;
&lt;li&gt;Improving performance&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;productivity-automation&#34;&gt;Productivity Automation&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Document generation&lt;/li&gt;
&lt;li&gt;Release note writing&lt;/li&gt;
&lt;li&gt;Code review assistance&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;advantages-from-a-developers-perspective&#34;&gt;Advantages from a Developer&amp;rsquo;s Perspective&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Maximized development speed&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Eliminates repetitive work&lt;/li&gt;
&lt;li&gt;Automates through actual implementation&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Context retention&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Works based on understanding the whole project&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Maintains focus&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Enables delegation of &amp;ldquo;annoying tasks&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As a result, development can continue while maintaining a &lt;strong&gt;flow state&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;limitations-important&#34;&gt;Limitations (Important)&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Not perfect&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;May make poor design decisions&lt;/li&gt;
&lt;li&gt;May generate inefficient code&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Responsibility&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Humans must ultimately verify the result&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Context limits&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;Difficult to fully understand organizational policies and business logic&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;recent-trend-important-point&#34;&gt;Recent Trend (Important Point)&lt;/h2&gt;
&lt;p&gt;Recently, Codex has moved beyond being a simple coding tool:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Expanding into a &lt;strong&gt;work automation agent&lt;/strong&gt;, not just development&lt;/li&gt;
&lt;li&gt;Being applied to real work in many companies&lt;/li&gt;
&lt;li&gt;Used by millions of developers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The shift is from &amp;ldquo;the era when AI answers&amp;rdquo; to &amp;ldquo;the era when AI works.&amp;rdquo;&lt;/p&gt;
&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Codex is changing the existing development workflow as follows.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Direct implementation -&amp;gt; task delegation&lt;/li&gt;
&lt;li&gt;Single task -&amp;gt; parallel work&lt;/li&gt;
&lt;li&gt;Code writing -&amp;gt; result-centered development&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;key-summary&#34;&gt;Key Summary&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;ChatGPT helps you think, and Codex does the work for you.&lt;/strong&gt;&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
      <category>Codex</category>
      
    </item>
    
    <item>
      <title>How to Install and Use OpenAI Codex</title>
      <link>https://www.devkuma.com/en/docs/open-ai/codex/install/</link>
      <pubDate>Sun, 26 Apr 2026 15:49:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/open-ai/codex/install/</guid>
      <description>
        
        
        &lt;p&gt;From a developer&amp;rsquo;s perspective, the important point is that &lt;strong&gt;Codex is not so much an &amp;ldquo;installable program&amp;rdquo; as a tool you connect to and use in several ways&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In other words, it is not a structure that must always be installed locally. It can be used through many interfaces such as ChatGPT, CLI, IDE, and the web.&lt;/p&gt;
&lt;h2 id=&#34;installing-codex&#34;&gt;Installing Codex&lt;/h2&gt;
&lt;h3 id=&#34;using-it-in-chatgpt-easiest-method&#34;&gt;Using It in ChatGPT (Easiest Method)&lt;/h3&gt;
&lt;p&gt;In fact, most people start without a separate installation.&lt;/p&gt;
&lt;h4 id=&#34;method&#34;&gt;Method&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;Log in to ChatGPT&lt;/li&gt;
&lt;li&gt;Enable the Codex feature (plan required)&lt;/li&gt;
&lt;li&gt;Start using it immediately&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Codex is provided as part of ChatGPT plans.&lt;/p&gt;
&lt;h4 id=&#34;features&#34;&gt;Features&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;No installation&lt;/li&gt;
&lt;li&gt;Immediate use&lt;/li&gt;
&lt;li&gt;Simplest method&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;installing-the-cli-on-mac-recommended-for-developers&#34;&gt;Installing the CLI on Mac (Recommended for Developers)&lt;/h3&gt;
&lt;p&gt;If you want to use Codex from the terminal on a Mac, this is the primary method.&lt;/p&gt;
&lt;h4 id=&#34;prerequisites&#34;&gt;Prerequisites&lt;/h4&gt;
&lt;p&gt;First, check the basic environment.&lt;/p&gt;
&lt;p&gt;The following are required:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;macOS (Ventura or later recommended)&lt;/li&gt;
&lt;li&gt;Node.js (18 or later)&lt;/li&gt;
&lt;li&gt;npm or pnpm&lt;/li&gt;
&lt;/ul&gt;
&lt;h6 id=&#34;installing-node-if-not-already-installed&#34;&gt;Installing Node (if not already installed)&lt;/h6&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;brew install node
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Check installation:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;node -v
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;npm -v
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h4 id=&#34;installing-codex-cli&#34;&gt;Installing Codex CLI&lt;/h4&gt;
&lt;p&gt;This is the officially provided CLI installation method.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;npm install -g @openai/codex
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Check installation:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;codex --version
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h4 id=&#34;login-most-important&#34;&gt;Login (Most Important)&lt;/h4&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;codex
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;On first execution, the console screen appears as follows.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Welcome to Codex, OpenAI&lt;span style=&#34;color:#a40000&#34;&gt;&amp;#39;&lt;/span&gt;s command-line coding agent
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Sign in with ChatGPT to use Codex as part of your paid plan
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  or connect an API key &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;for&lt;/span&gt; usage-based billing
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&amp;gt; 1. Sign in with ChatGPT
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;     Usage included with Plus, Pro, Business, and Enterprise plans
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  2. Sign in with Device Code
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;     Sign in from another device with a one-time code
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  3. Provide your own API key
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;     Pay &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;for&lt;/span&gt; what you use
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Press Enter to &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;continue&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Select option 1 to log in with your ChatGPT account, and the browser will open.
&lt;img src=&#34;image.png&#34; alt=&#34;ChatGPT login page opened from the Codex CLI&#34;&gt;&lt;/p&gt;
&lt;p&gt;After logging in, the CLI and your account are connected.&lt;/p&gt;
&lt;p&gt;Next, follow the remaining prompts in the console, confirming each step. Once that is done, Codex is ready to use.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;╭───────────────────────────────────────────────╮
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;│ &amp;gt;_ OpenAI Codex &lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;(&lt;/span&gt;v0.125.0&lt;span style=&#34;color:#ce5c00;font-weight:bold&#34;&gt;)&lt;/span&gt;                    │
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;│                                               │
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;│ model:     gpt-5.5   /model to change         │
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;│ directory: ~/develop/devkuma/devkuma-hugo-www │
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;╰───────────────────────────────────────────────╯
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Tip: GPT-5.5 is now available in Codex. It&lt;span style=&#34;color:#a40000&#34;&gt;&amp;#39;&lt;/span&gt;s our strongest agentic coding model yet, built to reason through large codebases, check
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  assumptions with tools, and keep going &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;until&lt;/span&gt; the work is &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;done&lt;/span&gt;.
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  Learn more: https://openai.com/index/introducing-gpt-5-5/
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;› Use /skills to list available skills
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h4 id=&#34;usage-flow&#34;&gt;Usage Flow&lt;/h4&gt;
&lt;p&gt;To summarize, the usage flow is as follows.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;codex&lt;/code&gt; in the CLI&lt;/li&gt;
&lt;li&gt;Proceed with login using your ChatGPT account&lt;/li&gt;
&lt;li&gt;Run it from the project folder&lt;/li&gt;
&lt;li&gt;Give tasks in natural language&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After this process, the CLI and account are connected.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Add Redis caching to this project&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;ide-extension-vs-code-etc&#34;&gt;IDE Extension (VS Code, etc.)&lt;/h3&gt;
&lt;h4 id=&#34;how-to-use&#34;&gt;How to Use&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;Open VS Code&lt;/li&gt;
&lt;li&gt;Install the Codex extension&lt;/li&gt;
&lt;li&gt;Log in&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&#34;advantages&#34;&gt;Advantages&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Make changes while viewing code&lt;/li&gt;
&lt;li&gt;Automatic refactoring&lt;/li&gt;
&lt;li&gt;Code review support&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;webapp-based-use&#34;&gt;Web/App-Based Use&lt;/h3&gt;
&lt;p&gt;Codex also supports the following methods:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Codex web&lt;/li&gt;
&lt;li&gt;Dedicated Codex app&lt;/li&gt;
&lt;li&gt;GitHub integration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In particular, the web version requires connecting a GitHub account.&lt;/p&gt;
&lt;h2 id=&#34;using-codex&#34;&gt;Using Codex&lt;/h2&gt;
&lt;h3 id=&#34;apisdk-method-advanced&#34;&gt;API/SDK Method (Advanced)&lt;/h3&gt;
&lt;p&gt;If you want to use it directly from the backend:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use an OpenAI API key&lt;/li&gt;
&lt;li&gt;Call a Codex model&lt;/li&gt;
&lt;li&gt;Build an automation system&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;recommended-installation-and-use-by-developer-level&#34;&gt;Recommended Installation and Use by Developer Level&lt;/h3&gt;
&lt;p&gt;Realistically, it can be divided as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Beginner
&lt;ul&gt;
&lt;li&gt;ChatGPT -&amp;gt; use immediately&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Practical work
&lt;ul&gt;
&lt;li&gt;Use CLI and IDE together&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Advanced
&lt;ul&gt;
&lt;li&gt;API + automation (CI/CD integration)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;h3 id=&#34;important-point-often-confused&#34;&gt;Important Point (Often Confused)&lt;/h3&gt;
&lt;p&gt;&amp;ldquo;Installing Codex = downloading a program&amp;rdquo; is not accurate.
More precisely, it means &amp;ldquo;installing an interface that lets you use Codex.&amp;rdquo;&lt;/p&gt;
&lt;h3 id=&#34;developer-tips&#34;&gt;Developer Tips&lt;/h3&gt;
&lt;p&gt;This combination is efficient:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CLI: Large tasks such as refactoring and feature additions&lt;/li&gt;
&lt;li&gt;ChatGPT: Ideas and design&lt;/li&gt;
&lt;li&gt;IDE: Detailed edits&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The key is to use them together.&lt;/p&gt;
&lt;h3 id=&#34;one-line-summary&#34;&gt;One-Line Summary&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Codex is not a tool you install; it is an AI development agent you connect to and use&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

      </description>
      
      <category>AI</category>
      
      <category>Codex</category>
      
    </item>
    
    <item>
      <title>Google AI</title>
      <link>https://www.devkuma.com/en/docs/ai/google/</link>
      <pubDate>Sun, 26 Apr 2026 11:48:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/google/</guid>
      <description>
        
        
        &lt;p&gt;An introduction to Google AI tools&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>AI Terms</title>
      <link>https://www.devkuma.com/en/docs/ai/term/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/term/</guid>
      <description>
        
        
        &lt;h2 id=&#34;recent-ai-terms-that-can-be-confusing-and-difficult&#34;&gt;Recent AI Terms That Can Be Confusing and Difficult&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;AI (Artificial Intelligence)
&lt;ul&gt;
&lt;li&gt;Technology that enables computers to imitate human intelligence, learn, and solve problems&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;ML (Machine Learning)
&lt;ul&gt;
&lt;li&gt;Technology that enables AI to learn automatically from data and make predictions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;DL (Deep Learning)
&lt;ul&gt;
&lt;li&gt;A type of ML that uses artificial neural networks to learn complex patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;AX (AI Transformation)
&lt;ul&gt;
&lt;li&gt;Organizational transformation centered on AI, going beyond DX&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;AGI (Artificial General Intelligence)
&lt;ul&gt;
&lt;li&gt;Artificial intelligence that is not limited to specific tasks and can solve problems intelligently across multiple fields like a human&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Gen (Generative) AI
&lt;ul&gt;
&lt;li&gt;Artificial intelligence that generates text, images, music, video, and more&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Prompting
&lt;ul&gt;
&lt;li&gt;The method of asking questions so that Gen AI can produce the needed answer&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;LLM (Large Language Model)
&lt;ul&gt;
&lt;li&gt;A model that understands and generates human language, trained on massive amounts of text data&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;FM (Foundation Model)
&lt;ul&gt;
&lt;li&gt;A model pre-trained on large and diverse datasets, used as a base model for language processing, image recognition, audio/video generation, and more&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Hallucination
&lt;ul&gt;
&lt;li&gt;A phenomenon in which AI reaches an incorrect conclusion and generates output or content that does not exist in reality&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Skipping Claude Code Permission Prompts</title>
      <link>https://www.devkuma.com/en/docs/ai/claude/dangerously-skip-permissions/</link>
      <pubDate>Thu, 11 Apr 2024 12:18:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/claude/dangerously-skip-permissions/</guid>
      <description>
        
        
        &lt;p&gt;When using Claude Code, if you do not want it to ask for permission every time it modifies files or runs commands, you can run it in &amp;ldquo;YOLO mode (allow all)&amp;rdquo; with the &lt;code&gt;--dangerously-skip-permissions&lt;/code&gt; flag, or configure detailed permissions through &lt;code&gt;settings.json&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;1-always-run-without-permission-prompts-yolo-mode&#34;&gt;1. Always Run Without Permission Prompts (YOLO Mode)&lt;/h2&gt;
&lt;p&gt;If you run the following command in the terminal, file creation, deletion, and command execution proceed immediately without asking for approval.&lt;/p&gt;
&lt;h3 id=&#34;linuxmacos&#34;&gt;Linux/macOS&lt;/h3&gt;
&lt;p&gt;Add &lt;code&gt;alias claude=&#39;claude --dangerously-skip-permissions&#39;&lt;/code&gt; to &lt;code&gt;.bashrc&lt;/code&gt; or &lt;code&gt;.zshrc&lt;/code&gt; for a permanent setting.&lt;/p&gt;
&lt;h3 id=&#34;windows&#34;&gt;Windows&lt;/h3&gt;
&lt;p&gt;Set a doskey macro with &lt;code&gt;claude=claude --dangerously-skip-permissions $*&lt;/code&gt;.&lt;/p&gt;
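For the Linux/macOS case above, the permanent alias can be set up with a short sketch (assuming zsh; substitute `~/.bashrc` for bash):

```shell
# Persist the YOLO-mode alias (zsh shown; use ~/.bashrc for bash)
echo "alias claude='claude --dangerously-skip-permissions'" >> ~/.zshrc

# Confirm the line was written; open a new terminal (or `source ~/.zshrc`) to activate it
grep "alias claude" ~/.zshrc
```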
&lt;h2 id=&#34;2-allow-only-specific-tools-settings-file&#34;&gt;2. Allow Only Specific Tools (Settings File)&lt;/h2&gt;
&lt;p&gt;As a safer method than &lt;code&gt;--dangerously-skip-permissions&lt;/code&gt;, you can specify allowed tools such as Bash and file read/write in &lt;code&gt;.claude/settings.json&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;{
  &amp;#34;permissions&amp;#34;: {
    &amp;#34;allow&amp;#34;: [&amp;#34;Bash(find:*)&amp;#34;]
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;3-auto-approval-mode-auto-mode&#34;&gt;3. Auto Approval Mode (Auto Mode)&lt;/h2&gt;
&lt;p&gt;In recent versions, you can use &amp;ldquo;Auto mode,&amp;rdquo; which operates without approval popups while maintaining certain safeguards. This can be controlled in environment settings.&lt;/p&gt;
&lt;p&gt;Note:&lt;br&gt;
&lt;code&gt;--dangerously-skip-permissions&lt;/code&gt; speeds up work, but because it can also execute dangerous actions without restriction, it is recommended only for automation or experimental environments.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Claude</title>
      <link>https://www.devkuma.com/en/docs/ai/claude/</link>
      <pubDate>Sun, 26 Apr 2026 11:48:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/claude/</guid>
      <description>
        
        
        &lt;p&gt;An introduction to Claude&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>AI Tools</title>
      <link>https://www.devkuma.com/en/docs/ai/tool/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/tool/</guid>
      <description>
        
        
        &lt;p&gt;AI tools&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>OpenAI Skills Explained, Installation, and Usage</title>
      <link>https://www.devkuma.com/en/docs/open-ai/skills/</link>
      <pubDate>Fri, 01 May 2026 10:00:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/open-ai/skills/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-are-skills&#34;&gt;What Are Skills?&lt;/h2&gt;
&lt;p&gt;When using AI tools, there will inevitably be moments when you repeatedly enter the same prompt. Asking for code reviews, generating test code, and analyzing logs are all tasks where explaining the same thing every time is less efficient than it seems.&lt;/p&gt;
&lt;p&gt;The key to solving this problem is &amp;ldquo;Skills.&amp;rdquo; Simply put, a Skill is a feature that turns frequently used work into a single &lt;strong&gt;reusable command&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Once created well, you can get the same result with a short trigger instead of explaining it at length every time.&lt;/p&gt;
&lt;p&gt;In other words, Skills are a format for writing reusable workflows, and they can be used not only in Codex CLI but also in IDE extensions and the Codex app.&lt;/p&gt;
&lt;h3 id=&#34;core-structure&#34;&gt;Core Structure&lt;/h3&gt;
&lt;p&gt;A Skill may look complex, but the structure is simple. The core is the flow of &amp;ldquo;input -&amp;gt; processing -&amp;gt; output.&amp;rdquo;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Input: User request, such as &amp;ldquo;make tests for this code&amp;rdquo;&lt;/li&gt;
&lt;li&gt;Processing: Defined prompt logic&lt;/li&gt;
&lt;li&gt;Output: Result, such as test code or an explanation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The important point is that a Skill does not simply save a prompt. It also includes the role, rules, and output format.&lt;/p&gt;
&lt;p&gt;For example, the difference between a simple request and a Skill is as follows.&lt;/p&gt;
&lt;p&gt;General request:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;Make test code for this code&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Skill:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Role: Test code expert&lt;/li&gt;
&lt;li&gt;Rules: Use Kotest and consider WebFlux&lt;/li&gt;
&lt;li&gt;Output: Immediately executable code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This one difference can completely change the quality of the result.&lt;/p&gt;
&lt;h2 id=&#34;installing-skills&#34;&gt;Installing Skills&lt;/h2&gt;
&lt;p&gt;Skills can be installed easily with commands or by directly placing files.&lt;/p&gt;
&lt;h3 id=&#34;method-1-install-from-the-official-catalog&#34;&gt;Method 1: Install from the Official Catalog&lt;/h3&gt;
&lt;p&gt;Each skill in the official OpenAI Codex catalog is a folder-structured module that bundles instructions, scripts, and resources so AI agents can repeatedly perform a specific task.&lt;/p&gt;
&lt;p&gt;You can find the Codex skill catalog in the official GitHub repository, openai/skills.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/openai/skills&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://github.com/openai/skills&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To install a Skill in Codex, run &lt;code&gt;$skill-installer&lt;/code&gt;.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$skill-installer {skill-name}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;For example, to install a skill named &lt;code&gt;screenshot&lt;/code&gt;, run:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$skill-installer screenshot
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When executed, files such as the Markdown specification (&lt;code&gt;SKILL.md&lt;/code&gt;) are created under &lt;code&gt;~/.codex/skills/&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The following files are created when the &lt;code&gt;screenshot&lt;/code&gt; Skill is installed.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;.codex
└── skills
    └── screenshot
        ├── agents
        │   └── openai.yaml
        ├── assets
        │   ├── screenshot-small.svg
        │   └── screenshot.png
        ├── LICENSE.txt
        ├── scripts
        │   ├── ensure_macos_permissions.sh
        │   ├── macos_display_info.swift
        │   ├── macos_permissions.swift
        │   ├── macos_window_info.swift
        │   ├── take_screenshot.ps1
        │   └── take_screenshot.py
        └── SKILL.md
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After installation, restart Codex and it can be used immediately.&lt;/p&gt;
&lt;h3 id=&#34;method-2-install-by-entering-a-github-url&#34;&gt;Method 2: Install by Entering a GitHub URL&lt;/h3&gt;
&lt;p&gt;You can also install by entering a GitHub URL directly.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$skill-installer https://github.com/openai/skills/tree/main/skills/.curated/{skill-name}
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;method-3-add-directly-to-the-project-root&#34;&gt;Method 3: Add Directly to the Project Root&lt;/h3&gt;
&lt;p&gt;Create a &lt;code&gt;.agents/skills/&lt;/code&gt; folder at the project root and place the skill folder there.
If this is pushed to Git, all team members can use it immediately.&lt;/p&gt;
&lt;h2 id=&#34;using-skills-in-codex&#34;&gt;Using Skills in Codex&lt;/h2&gt;
&lt;p&gt;Codex can use skills in two ways.&lt;/p&gt;
&lt;h3 id=&#34;explicit-invocation&#34;&gt;Explicit Invocation&lt;/h3&gt;
&lt;p&gt;In the CLI or IDE, you can explicitly call a skill by entering &lt;code&gt;${skill-name}&lt;/code&gt; in the prompt.&lt;/p&gt;
&lt;p&gt;For example, to use the &lt;code&gt;screenshot&lt;/code&gt; skill:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$screenshot https://www.devkuma.com --fullpage
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Alternatively, use the &lt;code&gt;/skills&lt;/code&gt; command to view and select from the list of available skills.&lt;/p&gt;
&lt;h3 id=&#34;implicit-invocation&#34;&gt;Implicit Invocation&lt;/h3&gt;
&lt;p&gt;If you describe the task, Codex automatically finds and runs a suitable skill.&lt;/p&gt;
&lt;p&gt;For example, if you say &amp;ldquo;Apply the comments on this PR,&amp;rdquo; the &lt;code&gt;gh-address-comments&lt;/code&gt; skill is automatically executed.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the task matches a skill&amp;rsquo;s &lt;code&gt;description&lt;/code&gt;, Codex automatically selects it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Implicit matching depends on the &lt;code&gt;description&lt;/code&gt; in the &lt;code&gt;skills/SKILL.md&lt;/code&gt; file.&lt;/p&gt;
&lt;h2 id=&#34;basic-skill-file-structure&#34;&gt;Basic Skill File Structure&lt;/h2&gt;
&lt;p&gt;Codex first refers only to each skill&amp;rsquo;s name, description, and file path. It loads the full contents of &lt;code&gt;SKILL.md&lt;/code&gt; only when it decides to use that skill.&lt;/p&gt;
&lt;p&gt;Codex includes a list of available skills in the initial context so it can select the right skill.
To keep it from consuming too much prompt space, this list is capped at roughly 2% of the full context, up to 8,000 characters.
If there are many skills, descriptions are shortened first. If there are too many, some skills are excluded from the list and a warning is shown.&lt;/p&gt;
&lt;p&gt;This limit applies only to the initial skill list. Once Codex selects a specific skill, it reads the full &lt;code&gt;SKILL.md&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;A skill is a directory made up of a &lt;code&gt;SKILL.md&lt;/code&gt; file plus optional scripts and resources.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;my-skill/
├── SKILL.md        # Required: instructions + metadata
├── scripts/        # Optional: executable code
├── references/     # Optional: documents
├── assets/         # Optional: templates, resources
└── agents/
    └── openai.yaml # Optional: UI and dependency settings
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;&lt;code&gt;SKILL.md&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Required file containing &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;description&lt;/code&gt;, and the core procedure Codex should follow.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;scripts/&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Optional, used when the same code would otherwise be written repeatedly.&lt;/li&gt;
&lt;li&gt;Good examples include collecting changed files, organizing PR metadata, or running a specific test.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;references/&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Optional place for reference materials.&lt;/li&gt;
&lt;li&gt;Good examples include RLS policy check criteria, a team&amp;rsquo;s API contract, or deployment policy.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;assets/&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Optional place for templates or resources used in outputs.&lt;/li&gt;
&lt;li&gt;Examples include review comment templates, PR description templates, and report format files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
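The layout above can be scaffolded with a few commands. This is a minimal sketch using a hypothetical skill name `review-helper`; only `SKILL.md` is required, the other folders are optional:

```shell
# Scaffold a minimal skill directory (hypothetical name: review-helper)
mkdir -p review-helper/scripts review-helper/references review-helper/assets review-helper/agents

# SKILL.md is the only required file: front matter (name, description) plus instructions
cat > review-helper/SKILL.md <<'EOF'
---
name: review-helper
description: Applies PR review comments. Does not perform the code review itself.
---

Read the open review comments, apply each requested change, and summarize what was modified.
EOF

# Verify the required file is in place
test -f review-helper/SKILL.md && echo "skill scaffolded"
```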
&lt;h2 id=&#34;creating-a-skill&#34;&gt;Creating a Skill&lt;/h2&gt;
&lt;p&gt;Official skills alone may not be enough. You may need custom skills for your own or your team&amp;rsquo;s workflow.&lt;/p&gt;
&lt;p&gt;To create a custom skill, you need to create the &lt;code&gt;SKILL.md&lt;/code&gt; file described above.&lt;/p&gt;
&lt;p&gt;How to write &lt;code&gt;SKILL.md&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-md&#34; data-lang=&#34;md&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;---
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;name: skill-name
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;description: Clearly explain when this skill should and should not run
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;---
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;Write the skill instructions Codex should follow
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The &lt;code&gt;description&lt;/code&gt; is the most important part. Since implicit matching depends on this &lt;code&gt;description&lt;/code&gt;, it must be clear and concise.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Put core use cases near the beginning&lt;/li&gt;
&lt;li&gt;Write trigger keywords clearly&lt;/li&gt;
&lt;li&gt;Make it matchable even if the description is shortened&lt;/li&gt;
&lt;li&gt;Include both Korean and English keywords&lt;/li&gt;
&lt;li&gt;List expressions that users are likely to enter&lt;/li&gt;
&lt;li&gt;Also state what the skill does not do to prevent incorrect matching&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Description example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;description: &amp;#34;Automatically applies PR review comments. Reacts to &amp;#39;리뷰 반영&amp;#39;, &amp;#39;PR 코멘트&amp;#39;, &amp;#39;address comments&amp;#39;, &amp;#39;fix review&amp;#39;. Does not perform the code review itself.&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;create-with-the-basic-generator&#34;&gt;Create with the Basic Generator&lt;/h3&gt;
&lt;p&gt;You can write &lt;code&gt;SKILL.md&lt;/code&gt; directly, but using the basic generator is recommended.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$skill-creator
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The generator asks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What the skill does&lt;/li&gt;
&lt;li&gt;When it should run&lt;/li&gt;
&lt;li&gt;Whether to include scripts; the default is instruction-only&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Codex automatically detects skill changes. If changes are not reflected, restart Codex.&lt;/p&gt;
&lt;h2 id=&#34;skill-storage-locations&#34;&gt;Skill Storage Locations&lt;/h2&gt;
&lt;p&gt;Codex loads Skills from several locations.&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Scope&lt;/th&gt;
          &lt;th&gt;Location&lt;/th&gt;
          &lt;th&gt;Purpose&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;REPO&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;$CWD/.agents/skills&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Skills that apply only to the current working directory&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;REPO&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;$CWD/../.agents/skills&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Skills that apply only to the parent directory&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;REPO&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;$REPO_ROOT/.agents/skills&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Shared skills for the whole repository&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;USER&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;$HOME/.agents/skills&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Personal skills that apply to all of the user&amp;rsquo;s projects&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;ADMIN&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;/etc/codex/skills&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;System-wide shared skills&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;SYSTEM&lt;/td&gt;
          &lt;td&gt;Built into Codex&lt;/td&gt;
          &lt;td&gt;Built-in default skills such as &lt;code&gt;$skill-installer&lt;/code&gt; and &lt;code&gt;$skill-creator&lt;/code&gt;&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Codex also supports symlinked skill folders.&lt;/p&gt;
&lt;p&gt;These locations are intended for local development and exploration. For external distribution, packaging skills as a plugin is recommended.&lt;/p&gt;
&lt;h2 id=&#34;optional-metadata&#34;&gt;Optional Metadata&lt;/h2&gt;
&lt;p&gt;If you add &lt;code&gt;agents/openai.yaml&lt;/code&gt;, you can configure UI and policy settings.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;interface&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;display_name&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Name shown to users&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;short_description&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Short description&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;icon_small&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;./assets/small-logo.svg&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;icon_large&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;./assets/large-logo.png&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;brand_color&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;#3B82F6&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;default_prompt&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Default prompt&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;policy&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;allow_implicit_invocation&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;false&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;dependencies&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;  &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;tools&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;    &lt;/span&gt;- &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;type&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;mcp&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;value&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;openaiDeveloperDocs&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;description&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;OpenAI Docs MCP server&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;transport&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;streamable_http&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;      &lt;/span&gt;&lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;url&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt; &lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;https://developers.openai.com/mcp&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#f8f8f8&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;ul&gt;
&lt;li&gt;Default value of &lt;code&gt;allow_implicit_invocation&lt;/code&gt;: &lt;code&gt;true&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;If set to &lt;code&gt;false&lt;/code&gt;, automatic invocation is disabled, while explicit invocation remains possible.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;enabling-and-disabling-skills&#34;&gt;Enabling and Disabling Skills&lt;/h2&gt;
&lt;p&gt;If there is a skill you do not want, you can disable it in &lt;code&gt;~/.codex/config.toml&lt;/code&gt; instead of deleting it.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-toml&#34; data-lang=&#34;toml&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;[[&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;skills&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;.&lt;/span&gt;&lt;span style=&#34;color:#000&#34;&gt;config&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;]]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000&#34;&gt;path&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;/path/to/skill/SKILL.md&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000&#34;&gt;enabled&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;=&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Restart Codex after changing the setting.&lt;/p&gt;
&lt;h2 id=&#34;distributing-skills-as-plugins&#34;&gt;Distributing Skills as Plugins&lt;/h2&gt;
&lt;p&gt;Local skill folders are suitable for development and testing.&lt;/p&gt;
&lt;p&gt;Packaging as a plugin is useful in cases such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Distributing reusable skills&lt;/li&gt;
&lt;li&gt;Bundling multiple skills together&lt;/li&gt;
&lt;li&gt;Including app integrations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A plugin can include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multiple skills&lt;/li&gt;
&lt;li&gt;App mappings&lt;/li&gt;
&lt;li&gt;MCP server settings&lt;/li&gt;
&lt;li&gt;UI assets&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;using-claude-skills-in-codex&#34;&gt;Using Claude Skills in Codex&lt;/h2&gt;
&lt;p&gt;Claude Code and Codex both follow the Agent Skills open standard (agentskills.io), so they share the same SKILL.md format, and Anthropic&amp;rsquo;s Skills repository can also be used in Codex.&lt;/p&gt;
&lt;p&gt;The method is simple:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Clone the Claude Skills repository into &lt;code&gt;.agents/skills/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Add a list-skills script.
&lt;ul&gt;
&lt;li&gt;It reads &lt;code&gt;SKILL.md&lt;/code&gt; files in the skills folder and outputs JSON.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Add an instruction to &lt;code&gt;AGENTS.md&lt;/code&gt;: &amp;ldquo;Run list-skills to check available skills.&amp;rdquo;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This allows skills made for Claude Code to be used in Codex as they are.
However, this is a usage method discovered by the community, and it is not an officially supported interoperability feature from OpenAI or Anthropic.&lt;/p&gt;
&lt;p&gt;Basic skills can be mutually compatible. However, each tool&amp;rsquo;s extensions, such as Claude Code&amp;rsquo;s &lt;code&gt;allowed-tools&lt;/code&gt; and Codex&amp;rsquo;s &lt;code&gt;.system&lt;/code&gt; directory, may not be compatible.&lt;/p&gt;
&lt;h3 id=&#34;claude-code-skills-vs-codex-skills&#34;&gt;Claude Code Skills vs Codex Skills&lt;/h3&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;&lt;/th&gt;
          &lt;th&gt;Claude Code&lt;/th&gt;
          &lt;th&gt;Codex&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Definition file&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;SKILL.md&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;SKILL.md&lt;/code&gt;&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Installation method&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;/plugin install&lt;/code&gt; + manual copy&lt;/td&gt;
          &lt;td&gt;&lt;code&gt;$skill-installer&lt;/code&gt;&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Official catalog&lt;/td&gt;
          &lt;td&gt;Available&lt;/td&gt;
          &lt;td&gt;35+ curated&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Implicit invocation&lt;/td&gt;
          &lt;td&gt;Supported&lt;/td&gt;
          &lt;td&gt;Supported&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Team sharing&lt;/td&gt;
          &lt;td&gt;Git commit&lt;/td&gt;
          &lt;td&gt;Git commit&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Execution environment&lt;/td&gt;
          &lt;td&gt;CLI + IDE extension&lt;/td&gt;
          &lt;td&gt;CLI + app + IDE&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id=&#34;practical-productivity-uses&#34;&gt;Practical Productivity Uses&lt;/h2&gt;
&lt;p&gt;Skills can be used in Codex CLI, IDE extensions, and the Codex app.&lt;/p&gt;
&lt;p&gt;What matters more than simply creating them is where they are used. Here are a few cases where the effect was significant.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Automated log analysis
&lt;ul&gt;
&lt;li&gt;WebFlux logs are difficult to analyze because of their asynchronous flow. If you create a Skill that acts as a &amp;ldquo;log interpretation expert,&amp;rdquo; it can explain complex logs structurally.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Standardized test code
&lt;ul&gt;
&lt;li&gt;Skills can solve the problem of each team having a different test style. If rules are enforced, all code is generated in the same style.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Removing repetitive work
&lt;ul&gt;
&lt;li&gt;API documentation generation&lt;/li&gt;
&lt;li&gt;Error message analysis&lt;/li&gt;
&lt;li&gt;Code refactoring suggestions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Once these tasks are turned into Skills, they can be handled at a near-automated level.&lt;/p&gt;
&lt;p&gt;The key is to set a rule: &amp;ldquo;Always create a Skill for tasks you do often.&amp;rdquo;&lt;/p&gt;
&lt;h2 id=&#34;summary-codex-skills-are-a-core-tool-for-developer-productivity&#34;&gt;Summary: Codex Skills Are a Core Tool for Developer Productivity&lt;/h2&gt;
&lt;p&gt;Codex Skills are not just a prompt-saving feature. They are a powerful tool for automating repetitive work and standardizing output quality.&lt;/p&gt;
&lt;p&gt;At first, creating just one is enough. Start with the task you do most often, such as test code generation, log analysis, or code review.&lt;/p&gt;
&lt;p&gt;Once you get used to them, the development flow changes enough to make you wonder why you did not use them earlier.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;faq&#34;&gt;FAQ&lt;/h2&gt;
&lt;h3 id=&#34;q1-when-is-it-good-to-use-codex-skills&#34;&gt;Q1. When is it good to use Codex Skills?&lt;/h3&gt;
&lt;p&gt;If you make the same request more than three times repeatedly, it is better to turn it into a Skill immediately. It is especially effective for tasks such as test code generation, documentation, and log analysis.&lt;/p&gt;
&lt;h3 id=&#34;q2-if-there-are-many-skills-does-management-become-difficult&#34;&gt;Q2. If there are many Skills, does management become difficult?&lt;/h3&gt;
&lt;p&gt;That is why naming and purpose definition are important. Clear names such as &amp;ldquo;test-webflux&amp;rdquo; and &amp;ldquo;log-analyzer&amp;rdquo; make management easier.&lt;/p&gt;
&lt;h3 id=&#34;q3-can-beginners-use-them-right-away&#34;&gt;Q3. Can beginners use them right away?&lt;/h3&gt;
&lt;p&gt;Yes. You only need to clearly define Role, Rules, and Output without complex logic. In fact, Skills can be especially helpful for beginners because they reduce repetitive work.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
      <category>Codex</category>
      
    </item>
    
    <item>
      <title>NotebookLM</title>
      <link>https://www.devkuma.com/en/docs/ai/notebook-lm/</link>
      <pubDate>Mon, 23 Feb 2026 08:48:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/notebook-lm/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-notebooklm&#34;&gt;What Is NotebookLM?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;NotebookLM&lt;/strong&gt; is an &lt;strong&gt;AI-based research and note organization tool&lt;/strong&gt; developed by Google.
Based on materials uploaded by the user, such as documents, PDFs, and web links, AI understands the content and performs summarization, organization, and question answering. It is close to a &lt;strong&gt;personalized AI research assistant&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Unlike conventional chatbots that answer based on general knowledge, its core feature is that it &lt;strong&gt;finds evidence and answers only within the materials provided by the user&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://notebooklm.google/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Google NotebookLM&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;main-features&#34;&gt;Main Features&lt;/h2&gt;
&lt;h3 id=&#34;grounded-ai-answers&#34;&gt;Grounded AI Answers&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Generates answers only from uploaded documents&lt;/li&gt;
&lt;li&gt;Provides &lt;strong&gt;sources/citations&lt;/strong&gt; with answers&lt;/li&gt;
&lt;li&gt;Strong for analyzing papers, contracts, and lecture materials&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;automatic-summarization&#34;&gt;Automatic Summarization&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Summarizes long PDFs around key points&lt;/li&gt;
&lt;li&gt;Structures key concepts, arguments, and conclusions&lt;/li&gt;
&lt;li&gt;Can be used to draft blogs and reports&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;questions-and-deeper-analysis&#34;&gt;Questions and Deeper Analysis&lt;/h3&gt;
&lt;p&gt;Examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;What are the three core arguments of this document?&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;What assumptions does the author make?&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Analyze it critically&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;-&amp;gt; Beyond simple summaries, &lt;strong&gt;critical analysis and comparative analysis are also possible&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id=&#34;audio-overview&#34;&gt;Audio Overview&lt;/h3&gt;
&lt;p&gt;A feature where two AIs explain the document content as if having a conversation
-&amp;gt; Can be listened to like a podcast&lt;/p&gt;
&lt;h2 id=&#34;supported-formats&#34;&gt;Supported Formats&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Google Docs&lt;/li&gt;
&lt;li&gt;PDF&lt;/li&gt;
&lt;li&gt;Website links&lt;/li&gt;
&lt;li&gt;Text files&lt;/li&gt;
&lt;li&gt;Google Slides&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;differences-from-chatgpt&#34;&gt;Differences from ChatGPT&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Category&lt;/th&gt;
          &lt;th&gt;NotebookLM&lt;/th&gt;
          &lt;th&gt;ChatGPT&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Answer basis&lt;/td&gt;
          &lt;td&gt;Focused on uploaded materials&lt;/td&gt;
          &lt;td&gt;Based on general training data&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Source display&lt;/td&gt;
          &lt;td&gt;Provided&lt;/td&gt;
          &lt;td&gt;Not provided by default&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Research analysis&lt;/td&gt;
          &lt;td&gt;Very strong&lt;/td&gt;
          &lt;td&gt;Possible, but general-purpose&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Creative writing&lt;/td&gt;
          &lt;td&gt;Average&lt;/td&gt;
          &lt;td&gt;Very strong&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2 id=&#34;recommended-for&#34;&gt;Recommended For&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;People who do a lot of paper or research work&lt;/li&gt;
&lt;li&gt;People preparing to write blogs or books&lt;/li&gt;
&lt;li&gt;Office workers who need to analyze contracts or policy documents&lt;/li&gt;
&lt;li&gt;Students who need summaries for exam preparation&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;NotebookLM is not a simple AI chatbot. It is closer to &lt;strong&gt;&amp;ldquo;an AI research partner that reads and understands my materials.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It can be especially powerful when writing a book, such as a Kotest book project, or creating material-based content.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>AI Q&amp;A</title>
      <link>https://www.devkuma.com/en/docs/ai/qna/</link>
      <pubDate>Sat, 30 Aug 2025 18:38:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/qna/</guid>
      <description>
        
        
        &lt;p&gt;This section collects questions and answers about AI.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Ollama</title>
      <link>https://www.devkuma.com/en/docs/ai/ollama/</link>
      <pubDate>Sat, 30 Aug 2025 17:05:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/ollama/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-ollama&#34;&gt;What Is Ollama?&lt;/h2&gt;
&lt;p&gt;Ollama is a tool that has recently become widely used among AI and LLM developers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ollama&lt;/strong&gt; is &lt;strong&gt;an open source platform that makes it easy to run and manage large language models (LLMs) in a local environment&lt;/strong&gt;.&lt;br&gt;
In other words, without using a cloud model such as the OpenAI API, it lets you load and use models such as Llama, Mistral, Gemma, and CodeLlama on &lt;strong&gt;your own PC (Mac/Linux/Windows)&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;main-features-of-ollama&#34;&gt;Main Features of Ollama&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Local execution support&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Models can run even without an internet connection&lt;/li&gt;
&lt;li&gt;Useful for corporate security and personal privacy&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simple model deployment&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Run a model with a single command such as &lt;code&gt;ollama run llama3&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Supports &lt;strong&gt;model package management&lt;/strong&gt; like Docker, managed with a configuration file called &lt;code&gt;Modelfile&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Support for multiple models&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Can download and run many models such as Meta LLaMA, Mistral, Gemma, Code Llama, and Phi&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API support&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Opens a local server in REST API format (&lt;code&gt;http://localhost:11434/api/generate&lt;/code&gt;) so other apps can call it&lt;/li&gt;
&lt;li&gt;Can be used like a local OpenAI API server&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GPU optimization&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Supports MPS (Mac) and CUDA (NVIDIA GPU), making it fast&lt;/li&gt;
&lt;li&gt;Can also run on CPU, but more slowly&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;using-ollama&#34;&gt;Using Ollama&lt;/h2&gt;
&lt;h3 id=&#34;installation&#34;&gt;Installation&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Packages are provided for macOS, Linux, and Windows
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://ollama.com/download&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://ollama.com/download&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;macOS
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;brew install ollama&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After installation on macOS, running Ollama displays an Ollama icon in the menu bar.&lt;br&gt;
&lt;img src=&#34;https://www.devkuma.com/docs/ai/ollama-macos.png&#34; alt=&#34;Ollama&#34;&gt;&lt;/p&gt;
&lt;h3 id=&#34;running-a-model&#34;&gt;Running a Model&lt;/h3&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;ollama run llama3
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;On first execution, the model is automatically downloaded and then run. The llama3 model is 4.7 GB.&lt;/p&gt;
&lt;h4 id=&#34;downloading-models&#34;&gt;Downloading Models&lt;/h4&gt;
&lt;p&gt;Search for models on the &lt;a href=&#34;https://www.ollama.com/search&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;official site&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt; and install the model you want to use.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Recommended models
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://www.ollama.com/library/gpt-oss&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://www.ollama.com/library/gpt-oss&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;Use this if you want ChatGPT-level chat, analysis, or work support. It is a little slow on a MacBook, but works well.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.ollama.com/library/phi4-mini&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://www.ollama.com/library/phi4-mini&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;Suitable for simple tasks such as searching for targets to call from an MCP server. Any lightweight model that supports tools is fine. Do not use qwen or deepseek.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;api-call-for-example-curl&#34;&gt;API Call (for example, curl)&lt;/h3&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;curl http://localhost:11434/api/generate -d &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#39;{
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;  &amp;#34;model&amp;#34;: &amp;#34;llama3&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;  &amp;#34;prompt&amp;#34;: &amp;#34;Explain quantum computing in simple terms&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;}&amp;#39;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id=&#34;model-management&#34;&gt;Model Management&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ollama list&lt;/code&gt; -&amp;gt; Check installed models&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ollama pull mistral&lt;/code&gt; -&amp;gt; Download a new model&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ollama create mymodel -f Modelfile&lt;/code&gt; -&amp;gt; Create a custom model&lt;/li&gt;
&lt;/ul&gt;
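The curl call shown earlier maps directly to any HTTP client. As a minimal sketch using only Python's standard library (the helper names are illustrative, and it assumes Ollama is running locally on its default port 11434):

```python
"""Minimal sketch of calling the local Ollama REST API from Python.
Helper names are illustrative; assumes an Ollama server on localhost:11434."""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    # stream=False asks the server for a single complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST a generate request and return the model's text response."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server with llama3 pulled):
#   print(generate("llama3", "Explain quantum computing in simple terms"))
```

Because the endpoint is plain HTTP with JSON, the same pattern works from any language or framework, which is what makes Ollama usable as a drop-in local model server.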
&lt;hr&gt;
&lt;h2 id=&#34;comparing-ollama-with-other-llm-execution-frameworks&#34;&gt;Comparing Ollama with Other LLM Execution Frameworks&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Tool&lt;/th&gt;
          &lt;th&gt;Features&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Ollama&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Simplest installation and execution, local API support, model package management&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;LM Studio&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;GUI-based, intuitive model selection and execution&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;vLLM&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Optimized for high-performance server execution, mainly used for large-scale deployments&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Text Generation WebUI&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Runs various models and provides a Web UI&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;OpenAI API&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Cloud-based and can use the latest models, but has cost and privacy issues&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2 id=&#34;use-cases&#34;&gt;Use Cases&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Building a &lt;strong&gt;local AI assistant&lt;/strong&gt; in a development environment&lt;/li&gt;
&lt;li&gt;Building an &lt;strong&gt;internal chatbot&lt;/strong&gt; connected to secure company data&lt;/li&gt;
&lt;li&gt;Building RAG systems by integrating with frameworks such as &lt;strong&gt;LangChain&lt;/strong&gt; and &lt;strong&gt;LlamaIndex&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prototyping: Experimenting quickly without using OpenAI API costs&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Ollama is a platform like &amp;ldquo;Docker for LLMs&amp;rdquo; that makes it easy to run LLMs locally&lt;/strong&gt;. It can be used for many purposes, from personal research to enterprise chatbots.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>The Concept and History of Artificial Intelligence</title>
      <link>https://www.devkuma.com/en/docs/ai/concept-and-history/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/concept-and-history/</guid>
      <description>
        
        
        &lt;h2 id=&#34;definition-of-artificial-intelligence&#34;&gt;Definition of Artificial Intelligence&lt;/h2&gt;
&lt;p&gt;Artificial intelligence refers to technology that enables computers to perform human abilities such as learning, reasoning, and problem solving. It began with simple rule-based automation and has now evolved into machine learning and deep learning technologies that can learn and adapt on their own.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/apple-siri-goolgle-assistant.png&#34; alt=&#34;Siri, Google Assistant&#34;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example: Smartphone voice assistants such as Siri and Google Assistant go beyond executing simple commands and increasingly support sophisticated conversations by learning users&amp;rsquo; language patterns.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;history-of-artificial-intelligence&#34;&gt;History of Artificial Intelligence&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;1950s: Alan Turing proposes the &amp;ldquo;Turing Test&amp;rdquo;&lt;/li&gt;
&lt;li&gt;1960s-1970s: Rule-based expert systems emerge&lt;/li&gt;
&lt;li&gt;1980s-1990s: Neural network research resumes and machine learning advances&lt;/li&gt;
&lt;li&gt;2000s onward: Deep learning grows rapidly with the development of big data and GPUs&lt;/li&gt;
&lt;li&gt;Today: AI spreads into many fields, including natural language processing, autonomous driving, and medical diagnosis&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/ai-history.png&#34; alt=&#34;History of artificial intelligence&#34;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Chart 1: Timeline of artificial intelligence history (major technological developments by year)&lt;/li&gt;
&lt;li&gt;Image source: &lt;a href=&#34;https://spri.kr/posts/view/21643?code=industry_trend&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://spri.kr/posts/view/21643?code=industry_trend&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>GitHub Copilot</title>
      <link>https://www.devkuma.com/en/docs/ai/copilot/</link>
      <pubDate>Sat, 30 Aug 2025 14:50:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/copilot/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-copilot&#34;&gt;What Is Copilot?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;GitHub Copilot&lt;/strong&gt; is an &lt;strong&gt;AI-based code completion tool&lt;/strong&gt; jointly developed by GitHub and OpenAI.&lt;/li&gt;
&lt;li&gt;When a developer writes code, it analyzes &lt;strong&gt;comments, function signatures, and context&lt;/strong&gt; to automatically suggest the most appropriate code.&lt;/li&gt;
&lt;li&gt;It is often called an &lt;strong&gt;AI pair programmer&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;main-features&#34;&gt;Main Features&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Automatic code suggestions&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Automatically generates a line, an entire function, or even test code&lt;/li&gt;
&lt;li&gt;Quickly fills in repeated patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comment-based development&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Generates code when natural language comments such as &lt;code&gt;// Write a function that adds two numbers&lt;/code&gt; are entered&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context understanding&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Can make more natural suggestions by referring to the current file and other files in the project&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Support for many languages and frameworks&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Supports a wide range of languages, including Python, JavaScript, TypeScript, Go, Java, C#, and C++&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;IDE integration&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Provided as plugins for major development environments such as VS Code, JetBrains IDEs, and Neovim&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
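&lt;p&gt;To make the comment-based workflow above concrete, here is the kind of completion Copilot might propose. This is an illustrative sketch, not actual Copilot output; real suggestions vary with the surrounding code.&lt;/p&gt;

```python
# The developer types only the comment; Copilot suggests the body.
# Write a function that adds two numbers
def add(a, b):
    return a + b

# It extends the same pattern to boilerplate such as tests:
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

test_add()
print(add(2, 3))   # 5
```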
&lt;h2 id=&#34;use-cases&#34;&gt;Use Cases&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reducing repetitive work&lt;/strong&gt;: Writing boilerplate code, CRUD APIs, and test code&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Learning new languages&lt;/strong&gt;: Quickly learning unfamiliar language or framework syntax&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Algorithm implementation&lt;/strong&gt;: Writing requirements as comments and generating code automatically&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Refactoring support&lt;/strong&gt;: Suggesting better implementation methods&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;advantages&#34;&gt;Advantages&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Faster development: Greatly reduces routine repetitive coding&lt;/li&gt;
&lt;li&gt;Learning effect: Shows examples of unfamiliar APIs or syntax&lt;/li&gt;
&lt;li&gt;Code consistency: Automates boilerplate according to team rules&lt;/li&gt;
&lt;li&gt;Test writing support: Accelerates the TDD cycle&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;limitations-and-drawbacks&#34;&gt;Limitations and Drawbacks&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Imperfect accuracy&lt;/strong&gt;: It does not always produce correct or optimal code&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security risks&lt;/strong&gt;: May include vulnerabilities, such as missing SQL injection defenses&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;License issues&lt;/strong&gt;: Because it was trained on public code, some generated code may raise copyright issues&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context limits&lt;/strong&gt;: It has limits in deeply understanding an entire project&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;cost&#34;&gt;Cost&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Paid subscription&lt;/strong&gt; (as of 2025)
&lt;ul&gt;
&lt;li&gt;Individual: about &lt;strong&gt;$10/month&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Business: about &lt;strong&gt;$19/month per user&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Free for verified students, teachers, and maintainers of popular open-source projects&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;difference-between-copilot-and-vibe-coding&#34;&gt;Difference Between Copilot and Vibe Coding&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Copilot&lt;/strong&gt;: The developer leads, and AI suggests as an assistant tool&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vibe coding&lt;/strong&gt;: A more exploratory style in which AI leads and the developer provides only goals and feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;similar-ai-coding-tools&#34;&gt;Similar AI Coding Tools&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Amazon CodeWhisperer&lt;/strong&gt; (AWS)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tabnine&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cursor&lt;/strong&gt; (an AI-first IDE with built-in chat)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Replit Ghostwriter&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;GitHub Copilot is an &amp;ldquo;AI pair programmer&amp;rdquo; that provides real-time code suggestions based on comments and code context&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;It reduces repetitive, low-productivity work and helps create draft code quickly, but &lt;strong&gt;verification, review, and testing&lt;/strong&gt; are always required.&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Core Technologies of Artificial Intelligence</title>
      <link>https://www.devkuma.com/en/docs/ai/core-technologies/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/core-technologies/</guid>
      <description>
        
        
        &lt;h2 id=&#34;machine-learning&#34;&gt;Machine Learning&lt;/h2&gt;
&lt;p&gt;Machine learning (ML) is &lt;strong&gt;a technology that allows computers to learn patterns from data and make predictions or decisions without being explicitly programmed&lt;/strong&gt;. In other words, it is the ability to discover rules and relationships from data without a person having to write every rule manually. Representative types include supervised learning, unsupervised learning, and reinforcement learning.&lt;/p&gt;
&lt;h3 id=&#34;types-of-machine-learning&#34;&gt;Types of Machine Learning&lt;/h3&gt;
&lt;p&gt;Machine learning is mainly divided into three types according to the &lt;strong&gt;learning method&lt;/strong&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Supervised Learning&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Definition&lt;/strong&gt;: Trains a model by providing input data together with the corresponding correct answers, or labels.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Predict correct answers from input data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Examples&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Email spam classification (spam/normal)&lt;/li&gt;
&lt;li&gt;House price prediction (area, location -&amp;gt; price)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Algorithms&lt;/strong&gt;: Linear regression, logistic regression, decision trees, random forests, support vector machines, and others&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Unsupervised Learning&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Definition&lt;/strong&gt;: Learns from input data without correct answers to find hidden patterns or structures.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Data clustering and dimensionality reduction&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Examples&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Customer segmentation (clusters based on purchase patterns)&lt;/li&gt;
&lt;li&gt;Anomaly detection (discovering fraudulent transactions)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Algorithms&lt;/strong&gt;: K-means clustering, hierarchical clustering, PCA (principal component analysis), and others&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reinforcement Learning&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Definition&lt;/strong&gt;: Learns an optimal strategy using rewards and penalties that follow actions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Optimize sequential decision making&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Examples&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;AlphaGo playing Go&lt;/li&gt;
&lt;li&gt;Route learning for autonomous vehicles&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Algorithms&lt;/strong&gt;: Q-learning, deep Q-networks (DQN), policy gradient methods, and others&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
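&lt;p&gt;A minimal sketch of the second type, unsupervised learning: the toy 1-D k-means below groups hypothetical customer spending amounts into two clusters without ever being shown labels.&lt;/p&gt;

```python
# Unsupervised learning in miniature: 1-D k-means with two clusters.
# The data are hypothetical daily purchase amounts; no labels are given,
# and the algorithm discovers the two customer groups on its own.

def kmeans_1d(points, iters=10):
    c1, c2 = min(points), max(points)          # simple initial centroids
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # Update step: move each centroid to its group's mean
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

spend = [5, 6, 7, 8, 95, 100, 105]   # two obvious groups: light vs heavy buyers
print(kmeans_1d(spend))              # roughly [6.5, 100.0]
```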
&lt;h3 id=&#34;how-machine-learning-works&#34;&gt;How Machine Learning Works&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Data collection&lt;/strong&gt;: Gather data for training (for example, images, text, sensor data)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data preprocessing&lt;/strong&gt;: Remove missing values, normalize data, and extract features&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Model selection&lt;/strong&gt;: Choose an ML algorithm suited to the problem type&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Training&lt;/strong&gt;: The model learns rules and patterns from data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Evaluation&lt;/strong&gt;: Measure model accuracy with test data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prediction&lt;/strong&gt;: Predict results for new data&lt;/li&gt;
&lt;/ol&gt;
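&lt;p&gt;The six steps above can be walked through end to end with a tiny, made-up house-price dataset: fit a straight line by least squares, evaluate it on a held-out point, then predict. Real projects would use far more data and a library such as scikit-learn.&lt;/p&gt;

```python
def fit_linear(xs, ys):
    """Training (step 4): learn slope/intercept that minimize squared error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# 1-2. Data collection and preprocessing (already-clean toy data: area -> price)
areas  = [30, 50, 70, 90, 110]
prices = [150, 250, 350, 450, 550]

# 3-4. Model selection (a line) and training
slope, intercept = fit_linear(areas, prices)

# 5. Evaluation: measure the error on a held-out data point
test_area, test_price = 80, 400
pred = slope * test_area + intercept
error = abs(pred - test_price)

# 6. Prediction for new data
print(round(slope, 2), round(intercept, 2), round(pred, 1), round(error, 1))
```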
&lt;h3 id=&#34;real-world-examples-of-machine-learning&#34;&gt;Real-World Examples of Machine Learning&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Healthcare&lt;/strong&gt;: Supporting disease diagnosis through MRI and CT image analysis&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Finance&lt;/strong&gt;: Predicting credit scores and detecting fraudulent transactions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;E-commerce&lt;/strong&gt;: Personalized product recommendations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Autonomous driving&lt;/strong&gt;: Recognizing road objects and deciding routes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Natural language processing&lt;/strong&gt;: Machine translation and chatbot dialogue generation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advantages-of-machine-learning&#34;&gt;Advantages of Machine Learning&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Enables data-driven decision making&lt;/li&gt;
&lt;li&gt;Efficient for processing large-scale data and learning complex patterns&lt;/li&gt;
&lt;li&gt;Performance can improve through repeated learning&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;limitations-of-machine-learning&#34;&gt;Limitations of Machine Learning&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data dependence&lt;/strong&gt;: High-quality data is essential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Overfitting&lt;/strong&gt;: The model becomes specialized only for training data and lacks generalization ability&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Lack of explainability&lt;/strong&gt;: It can be difficult to understand why a model made a particular decision&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ethical issues&lt;/strong&gt;: Biased data can produce unfair results&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;deep-learning&#34;&gt;Deep Learning&lt;/h2&gt;
&lt;p&gt;Deep learning is &lt;strong&gt;a technology in the field of artificial intelligence that is based on artificial neural networks modeled after the structure of the human brain and learns complex patterns and features in data through multilayer structures&lt;/strong&gt;. Compared with simple machine learning models that learn basic relationships in data, deep learning passes through many layers and learns increasingly abstract features, giving it strength in solving high-dimensional problems. It shows excellent performance in image recognition, speech recognition, natural language processing, and other areas.&lt;/p&gt;
&lt;h3 id=&#34;core-principles-of-deep-learning&#34;&gt;Core Principles of Deep Learning&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Artificial neural network structure&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Input layer: The layer where data first enters&lt;/li&gt;
&lt;li&gt;Hidden layer: The layer that processes input data and extracts features&lt;/li&gt;
&lt;li&gt;Output layer: The layer that outputs the final prediction or classification result&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Learning process&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Forward Propagation&lt;/strong&gt;: Calculates output values from input data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Loss Function&lt;/strong&gt;: Measures the difference between the output value and the actual value&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backpropagation&lt;/strong&gt;: Adjusts weights based on the error&lt;/li&gt;
&lt;li&gt;Gradually improves prediction accuracy through repeated learning&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Activation Function&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Converts the output value at each node in the neural network to introduce nonlinearity&lt;/li&gt;
&lt;li&gt;Representative functions: Sigmoid, ReLU, hyperbolic tangent (Tanh)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
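&lt;p&gt;Forward propagation, the loss, backpropagation, and a sigmoid activation can all be seen in a single artificial neuron. This minimal sketch trains one neuron to reproduce logical OR by gradient descent; deep learning stacks many such units into the multilayer structures described above.&lt;/p&gt;

```python
import math
import random

def sigmoid(z):
    """Activation function: squashes any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: the logical OR function (linearly separable, so one neuron suffices)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), 0.0
lr = 1.0   # learning rate

for epoch in range(2000):
    for (x1, x2), y in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)   # forward propagation
        grad = (out - y) * out * (1 - out)     # backprop of the squared-error loss
        w1 -= lr * grad * x1                   # weight updates reduce the error
        w2 -= lr * grad * x2
        b -= lr * grad

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)   # after training: [0, 1, 1, 1]
```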
&lt;h3 id=&#34;features-of-deep-learning&#34;&gt;Features of Deep Learning&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Automatic feature extraction&lt;/strong&gt;: Can learn useful features from data without humans defining them one by one&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multilayer structure&lt;/strong&gt;: More hidden layers make it possible to learn more complex patterns&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Suitable for large-scale data&lt;/strong&gt;: Enables efficient learning through big data and GPU computation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;major-application-areas-of-deep-learning&#34;&gt;Major Application Areas of Deep Learning&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Image recognition&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Road object recognition in autonomous vehicles, medical image diagnosis, and more&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Speech recognition&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Voice assistants and real-time interpretation systems&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Natural language processing&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Machine translation, chatbots, and document summarization&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recommendation systems&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Personalized product recommendations in e-commerce and video recommendation algorithms&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;advantages-of-deep-learning&#34;&gt;Advantages of Deep Learning&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Can achieve high accuracy even with complex, high-dimensional data&lt;/li&gt;
&lt;li&gt;Reduces the burden of data preprocessing through automated feature extraction&lt;/li&gt;
&lt;li&gt;Delivers far better performance than conventional methods in various fields&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;limitations-of-deep-learning&#34;&gt;Limitations of Deep Learning&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Requires large amounts of data for training&lt;/strong&gt;: Performance drops when there is not enough data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Lack of explainability&lt;/strong&gt;: The model&amp;rsquo;s decision process is opaque, creating a &amp;ldquo;black box&amp;rdquo; problem&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Computational cost and time&lt;/strong&gt;: Requires GPUs and large-scale computing resources&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Risk of overfitting&lt;/strong&gt;: May become optimized only for training data and lack generalization ability&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;natural-language-processing-nlp&#34;&gt;Natural Language Processing (NLP)&lt;/h2&gt;
&lt;p&gt;Natural language processing (NLP) is &lt;strong&gt;a field of artificial intelligence technology that enables computers to understand, analyze, and generate the language humans use, namely natural language&lt;/strong&gt;. Through NLP, machines can process language data in text and speech form, understand meaning, or generate appropriate responses. It has many applications, including translation, question answering, chatbots, and document summarization, and recent GPT-family models have achieved major results.&lt;/p&gt;
&lt;h3 id=&#34;main-goals-of-natural-language-processing&#34;&gt;Main Goals of Natural Language Processing&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Language Understanding&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understand the meaning, context, and intent of input sentences&lt;/li&gt;
&lt;li&gt;Examples: Question answering systems and sentiment analysis&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Language Generation&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Generate natural sentences that people can understand&lt;/li&gt;
&lt;li&gt;Examples: Chatbot conversations, automatic document summarization, and machine translation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;core-technologies-of-natural-language-processing&#34;&gt;Core Technologies of Natural Language Processing&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Morphological Analysis&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Splits a sentence into morphemes, the smallest units of meaning&lt;/li&gt;
&lt;li&gt;Example: &amp;ldquo;나는 학교에 간다&amp;rdquo; (&amp;ldquo;I go to school&amp;rdquo;) -&amp;gt; [나/는, 학교/에, 가/ㄴ다]&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Part-of-Speech Tagging&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Attaches parts of speech such as nouns, verbs, and adjectives to each word&lt;/li&gt;
&lt;li&gt;Provides a basis for analyzing sentence structure and meaning&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Semantic Analysis&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Converts the meaning of sentences or words so a computer can understand them&lt;/li&gt;
&lt;li&gt;Example: Determining whether &amp;ldquo;bank&amp;rdquo; means a financial institution or the side of a river&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Syntactic Parsing&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Analyzes the grammatical structure of a sentence to identify relationships among subject, object, and verb&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Text Embedding&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Converts words, sentences, and documents into numerical vectors&lt;/li&gt;
&lt;li&gt;Allows machine learning models to process natural language data&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Language Model&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Learns context and patterns to predict the next word&lt;/li&gt;
&lt;li&gt;GPT, BERT, and other modern models are representative examples&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
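&lt;p&gt;The last item, the language model, can be shown at toy scale: count which word follows which, then predict the most frequent continuation. GPT-family models perform conceptually the same next-word prediction, but with neural networks trained on billions of tokens instead of raw counts.&lt;/p&gt;

```python
from collections import Counter, defaultdict

# A toy corpus; a real language model trains on billions of tokens.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count word-pair (bigram) frequencies: which word follows which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' appears most often after 'the'
```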
&lt;h3 id=&#34;applications-of-natural-language-processing&#34;&gt;Applications of Natural Language Processing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Machine translation&lt;/strong&gt;: Google Translate, DeepL, and others&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chatbots and conversational AI&lt;/strong&gt;: Customer support automation and personal tutor AI&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Text summarization&lt;/strong&gt;: Automatic summarization of news articles, papers, and reports&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sentiment analysis&lt;/strong&gt;: Positive and negative analysis based on social media and review data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Speech recognition and speech synthesis&lt;/strong&gt;: Siri, Alexa, and TTS (Text-to-Speech) systems&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;advantages-of-natural-language-processing&#34;&gt;Advantages of Natural Language Processing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Enables natural interaction between humans and machines&lt;/li&gt;
&lt;li&gt;Makes it possible to automatically analyze and use massive amounts of text data&lt;/li&gt;
&lt;li&gt;Improves services such as translation, search, and recommendation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;limitations-of-natural-language-processing&#34;&gt;Limitations of Natural Language Processing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Limits in context understanding&lt;/strong&gt;: Difficulty understanding complex contexts or ambiguous expressions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Language and cultural bias&lt;/strong&gt;: Bias can occur when relying on data from specific languages or cultures&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data dependence&lt;/strong&gt;: Accuracy drops without high-quality training data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Need for computing resources&lt;/strong&gt;: Training large language models requires enormous computational cost&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;reinforcement-learning&#34;&gt;Reinforcement Learning&lt;/h2&gt;
&lt;p&gt;Reinforcement learning is a technology that learns optimal action strategies based on rewards and penalties. AlphaGo&amp;rsquo;s victory in Go is a representative example.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/alphago-leesedol.png&#34; alt=&#34;Match between AlphaGo and Lee Sedol&#34;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example: The match between AlphaGo and Lee Sedol was an event that showed the practical achievements of reinforcement learning to the world.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Reinforcement learning refers to an artificial intelligence technology in which an agent learns an optimal action strategy through experience while interacting with an environment. The agent observes the current state, chooses one of several possible actions, and receives a reward or penalty for that choice. Through this feedback, the agent gradually improves its policy so that it can obtain the maximum reward over the long term. The core of reinforcement learning is that the agent can discover an optimal strategy on its own through repeated trial and error, even without being explicitly told the correct answer.&lt;/p&gt;
&lt;p&gt;A representative example is &lt;strong&gt;AlphaGo&lt;/strong&gt;. AlphaGo learned optimal moves in Go by repeating countless Go game simulations and reinforcement learning, and it achieved victory against the human champion Lee Sedol. This demonstrated that reinforcement learning can be effective even in solving complex problems.&lt;/p&gt;
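&lt;p&gt;The state-action-reward loop described above can be sketched with tabular Q-learning on a hypothetical five-state corridor, where only the rightmost state pays a reward:&lt;/p&gt;

```python
import random

# States 0..4 laid out in a line; reaching state 4 yields reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)   # the learned policy moves right (action index 1) toward the goal
```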

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>GPT and Generative AI</title>
      <link>https://www.devkuma.com/en/docs/ai/gpt/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/gpt/</guid>
      <description>
        
        
        &lt;h2 id=&#34;concept-of-gpt&#34;&gt;Concept of GPT&lt;/h2&gt;
&lt;p&gt;GPT (Generative Pre-trained Transformer) is an artificial intelligence model that learns massive language patterns through pre-training and can then generate new sentences.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Generative&lt;/strong&gt;: Means that it can generate new sentences. It can produce answers to questions or create new text when asked to write.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pre-trained&lt;/strong&gt;: Means a model trained in advance on massive amounts of data. It learns language rules and expressions from various texts such as books, websites, and news.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transformer&lt;/strong&gt;: The name of a model architecture designed to understand and process sentence meaning effectively. This technology was developed by Google.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;how-it-works&#34;&gt;How It Works&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pre-training&lt;/strong&gt;
GPT learns patterns, grammar, meaning, and other aspects of language from massive text data available on the internet. For example, when given the sentence &amp;ldquo;The reason the sky is blue is,&amp;rdquo; it learns that descriptions such as &amp;ldquo;because sunlight is scattered&amp;rdquo; often follow.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Inference&lt;/strong&gt;
After pre-training is complete, GPT receives a question from a user and generates a suitable, natural answer.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Context Understanding&lt;/strong&gt;
GPT tries to respond appropriately by referring to previous context. For example, if &amp;ldquo;spring&amp;rdquo; was mentioned earlier in a conversation, it can more easily associate and use topics such as flowers or weather.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;advantages-of-gpt&#34;&gt;Advantages of GPT&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Can perform many language tasks&lt;/strong&gt;
It can perform a range of language processing tasks, including question answering, writing, translation, summarization, and grammar correction.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Generates natural expressions&lt;/strong&gt;
GPT-based chatbots can produce conversational sentences naturally, providing an experience similar to talking with a person.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Fast responses&lt;/strong&gt;
It can generate and provide desired information within seconds.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;emergence-of-chatgpt&#34;&gt;Emergence of ChatGPT&lt;/h2&gt;
&lt;p&gt;ChatGPT is a conversational artificial intelligence service based on a &lt;strong&gt;large language model (LLM)&lt;/strong&gt; developed by OpenAI. Based on GPT (Generative Pre-trained Transformer) technology, it conducts natural human-like conversations and supports various language processing tasks such as question answering, document writing, translation, and summarization. Its strengths include natural context understanding and multilingual support.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://chatgpt.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://chatgpt.com/&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;meaning-of-the-name&#34;&gt;Meaning of the Name&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Chat&lt;/strong&gt;: Refers to the conversational interface with users.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GPT&lt;/strong&gt;: Refers to a pre-trained transformer architecture-based model with sentence generation ability.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It implies &amp;ldquo;a conversational service based on GPT technology.&amp;rdquo;&lt;/p&gt;
&lt;h3 id=&#34;features-and-functions&#34;&gt;Features and Functions&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Broad usability
&lt;ul&gt;
&lt;li&gt;Supports question answering, sentence writing, translation, summarization, programming (code support), and more.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Natural conversational ability
&lt;ul&gt;
&lt;li&gt;Can generate natural responses that feel like talking with a person.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Multilingual support
&lt;ul&gt;
&lt;li&gt;Can communicate in various languages, including Korean.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Technical flexibility
&lt;ul&gt;
&lt;li&gt;Can generate output in various formats, including text, code, tables, and JSON, and can also be used in no-code environments.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;main-features&#34;&gt;Main Features&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Can perform many language tasks&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;It can be used broadly for question answering, writing, translation, summarization, grammar correction, code writing, and more.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Natural conversation generation&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Can produce natural language expressions that feel like human conversation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multilingual support&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Can communicate in many languages, including Korean.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flexible output formats&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Can provide results not only as text but also in various formats such as code, tables, and JSON.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;technical-foundation&#34;&gt;Technical Foundation&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Pre-training&lt;/strong&gt;: Learns the structure and meaning of language from massive text data such as the web, books, papers, and source code.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transformer architecture&lt;/strong&gt;: A core technology for context understanding and sentence generation that uses the self-attention mechanism to effectively identify relationships among words.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;use-cases&#34;&gt;Use Cases&lt;/h2&gt;
&lt;p&gt;ChatGPT can be used in many ways and is being applied in practice across multiple industries and fields. Representative use cases include the following.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Customer service automation&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Corporate customer support departments use ChatGPT to provide 24-hour online consultation services. This makes it possible to handle basic customer inquiries quickly and helps human agents focus on complex issues.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Education and learning support&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In education, ChatGPT provides personalized learning suited to each student&amp;rsquo;s pace and level. When a student enters a question, AI provides an easy-to-understand answer and recommends additional learning materials, improving learning efficiency.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Programming code assistance&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Programmers can use ChatGPT to assist with coding and debugging. By automating repetitive coding tasks, function template generation, and error review, it improves development speed and productivity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Content generation and summarization&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ChatGPT can perform various content generation tasks such as article summaries, advertising copy, and report writing. It can also summarize large documents and quickly deliver key information, improving information processing efficiency.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;limitations-and-challenges&#34;&gt;Limitations and Challenges&lt;/h2&gt;
&lt;p&gt;Although ChatGPT offers clear advantages, it also has the following limitations and challenges.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Possibility of factual errors&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Because ChatGPT generates answers based on learned data, incorrect information may sometimes be included. Additional verification is therefore needed when using it for important decision making.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Limits in reflecting the latest information&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The model may not reflect the latest information or events after the point at which it was trained. In fields that require real-time information, it is appropriate to use it as an auxiliary tool.&lt;/li&gt;
&lt;li&gt;However, recent GPT models may compensate for this by integrating with internet search features.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data use and copyright issues&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The data learned by ChatGPT may include copyrighted materials, and generated content may also be connected to intellectual property issues. Therefore, legal issues must be considered carefully in commercial use.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Prediction-based operation&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GPT operates by probabilistically predicting the next word in a sequence. It therefore has no human-like deep understanding or emotion; it is more accurate to say it &amp;ldquo;predicts&amp;rdquo; patterns than that it truly &amp;ldquo;understands&amp;rdquo; meaning.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

      </description>
      
      <category>AI</category>
      
      <category>ChatGPT</category>
      
    </item>
    
    <item>
      <title>LLM (Large Language Model)</title>
      <link>https://www.devkuma.com/en/docs/ai/llm/</link>
      <pubDate>Sun, 24 Aug 2025 13:14:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/llm/</guid>
      <description>
        
        
        &lt;h2 id=&#34;llm-overview&#34;&gt;LLM Overview&lt;/h2&gt;
&lt;p&gt;LLMs (Large Language Models) are &lt;strong&gt;artificial intelligence models that learn from massive amounts of text data and can understand and generate natural language&lt;/strong&gt;. They mainly use a &lt;strong&gt;deep learning-based transformer architecture&lt;/strong&gt;, so they statistically capture the characteristics of human language and have advanced text generation and processing capabilities.&lt;/p&gt;
&lt;p&gt;LLMs are now a central part of AI and play a very important role in language-based applications and system design.&lt;/p&gt;
&lt;h2 id=&#34;how-llms-work&#34;&gt;How LLMs Work&lt;/h2&gt;
&lt;h3 id=&#34;learning-method-and-transformer-architecture&#34;&gt;Learning Method and Transformer Architecture&lt;/h3&gt;
&lt;p&gt;LLMs are pre-trained through &lt;strong&gt;unsupervised (self-supervised) learning&lt;/strong&gt; on text corpora containing hundreds of billions of words.&lt;br&gt;
In particular, the &lt;strong&gt;transformer architecture&lt;/strong&gt; understands contextual relationships through self-attention, and because it processes data in parallel, unlike earlier recurrent neural networks (RNNs), it is highly efficient to train.&lt;/p&gt;
&lt;h3 id=&#34;parameters-and-embeddings&#34;&gt;Parameters and Embeddings&lt;/h3&gt;
&lt;p&gt;The &amp;ldquo;large&amp;rdquo; in LLM refers to the number of parameters, which ranges from billions to hundreds of billions. This enormous parameter count makes it possible to capture complex contexts and nuances in language.
In addition, an &amp;ldquo;embedding&amp;rdquo; converts each word into a multidimensional vector that numerically represents semantic similarity, helping the model understand context.&lt;/p&gt;
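&lt;p&gt;The idea that embeddings encode semantic similarity numerically can be illustrated with cosine similarity between hand-made vectors (the 3-dimensional values below are invented; real embeddings have hundreds or thousands of dimensions):&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: "cat" and "dog" point in similar directions, "car" does not.
emb = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

print(cosine_similarity(emb["cat"], emb["dog"]))  # close to 1
print(cosine_similarity(emb["cat"], emb["car"]))  # much smaller
```

&lt;p&gt;A real embedding model would produce these vectors from text; the geometry (nearby vectors mean related words) is what the model exploits to understand context.&lt;/p&gt;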
&lt;h2 id=&#34;application-areas&#34;&gt;Application Areas&lt;/h2&gt;
&lt;p&gt;LLMs can be used very flexibly. Representative applications include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Generative AI&lt;/strong&gt;: Generates text such as essays, translations, and summaries based on user prompts&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Code generation&lt;/strong&gt;: Supports code writing from natural language, as seen in GitHub Copilot, AWS CodeWhisperer, and similar tools&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Text classification and sentiment analysis&lt;/strong&gt;: Customer feedback classification, document clustering, and more&lt;/li&gt;
&lt;li&gt;Others: Knowledge-intensive NLP (KI-NLP) question answering, chatbots, customer service automation, and more&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;types-of-learning-methods&#34;&gt;Types of Learning Methods&lt;/h2&gt;
&lt;p&gt;There are three main ways to use an LLM for a specific purpose:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Zero-shot learning&lt;/strong&gt;: Performs various tasks with general prompts without additional training&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Few-shot learning&lt;/strong&gt;: Improves performance by providing a small number of examples&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fine-tuning&lt;/strong&gt;: Further trains parameters on specific data to enable specialized use&lt;/li&gt;
&lt;/ul&gt;
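&lt;p&gt;The first two approaches differ only in what is placed into the prompt (fine-tuning, by contrast, changes the model weights). A minimal sketch of zero-shot versus few-shot prompt assembly, with invented instruction text and examples:&lt;/p&gt;

```python
def build_prompt(task, examples=None, query=""):
    """Assemble a prompt: zero-shot if no examples are given,
    few-shot if a handful of input/output pairs are prepended."""
    parts = [task]
    for inp, out in (examples or []):
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: the instruction alone.
zero = build_prompt("Classify the sentiment as positive or negative.",
                    query="I love this product")

# Few-shot: the same instruction plus a few worked examples.
few = build_prompt("Classify the sentiment as positive or negative.",
                   examples=[("Great service!", "positive"),
                             ("Terrible quality.", "negative")],
                   query="I love this product")

print(zero)
print(few)
```

&lt;p&gt;Both strings would then be sent to the same unchanged model; the few-shot version usually yields more reliable outputs because the examples demonstrate the expected format.&lt;/p&gt;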
&lt;h2 id=&#34;importance-and-expected-benefits&#34;&gt;Importance and Expected Benefits&lt;/h2&gt;
&lt;p&gt;Adopting LLMs can bring various benefits to companies and organizations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Work automation&lt;/strong&gt;: Improves productivity by automating language-based tasks such as customer support, document summarization, and content generation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalability and flexibility&lt;/strong&gt;: A single model can flexibly handle multiple tasks such as translation, summarization, and question answering&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Encouraging innovation&lt;/strong&gt;: Provides a foundation for many future possibilities, including knowledge extraction, creative assistance, and conversational interfaces&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;limitations-and-considerations&#34;&gt;Limitations and Considerations&lt;/h2&gt;
&lt;p&gt;When using LLMs, the following limitations should also be considered:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;High resource requirements&lt;/strong&gt;: Training and serving models with billions of parameters requires substantial computing resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Potential bias and errors&lt;/strong&gt;: Limitations or biases in training data can be reflected in model outputs, requiring continuous improvement in accuracy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Privacy and security concerns&lt;/strong&gt;: Systems must account for the possibility that private or sensitive data appears in training data or user inputs.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Item&lt;/th&gt;
          &lt;th&gt;Description&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Definition&lt;/td&gt;
          &lt;td&gt;A massive text-based deep learning model capable of natural language understanding and generation&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;How it works&lt;/td&gt;
          &lt;td&gt;Transformer-based, with self-attention, embeddings, and billions of parameters&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Applications&lt;/td&gt;
          &lt;td&gt;Text generation, code generation, classification, summarization, chatbots, and more&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Learning methods&lt;/td&gt;
          &lt;td&gt;Zero-shot, few-shot, fine-tuning&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Advantages&lt;/td&gt;
          &lt;td&gt;Automation, scalability, and creative use&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Limitations&lt;/td&gt;
          &lt;td&gt;Resource demands, bias and accuracy issues, security risks, and more&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

      </description>
      
      <category>AI</category>
      
      <category>ChatGPT</category>
      
      <category>LLM</category>
      
    </item>
    
    <item>
      <title>MCP (Model Context Protocol)</title>
      <link>https://www.devkuma.com/en/docs/ai/mcp/</link>
      <pubDate>Sun, 24 Aug 2025 13:14:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/mcp/</guid>
      <description>
        
        
        &lt;h2 id=&#34;mcp-overview&#34;&gt;MCP Overview&lt;/h2&gt;
&lt;p&gt;Model Context Protocol (MCP) is &lt;strong&gt;an open standard protocol that helps AI, especially large language models (LLMs), interact effectively with external data sources and tools&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This protocol is designed so applications can deliver context to LLMs in a consistent way. It is often described with the metaphor of a &lt;strong&gt;USB-C port for AI&lt;/strong&gt;: just as USB-C connects many devices through one unified connector, MCP connects AI models to many resources in a standardized way.&lt;/p&gt;
&lt;p&gt;In other words, it is &lt;strong&gt;a common interface that allows AI models to connect with various external systems&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/mcp-architecture.png&#34; alt=&#34;MCP Architecture&#34;&gt;&lt;/p&gt;
&lt;h3 id=&#34;main-features&#34;&gt;Main Features&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Standardized interface&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides a common protocol that allows models to access &amp;ldquo;data sources / tools / applications.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pluggable structure&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It is not tied to a specific application. Any model that supports MCP can be extended in the same way.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Security and control&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Designed to limit the scope a model can access and allow access only to resources approved by the user.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Developer friendly&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Can be used commonly across several AI models such as OpenAI and Anthropic, so &amp;ldquo;an MCP tool built once can be used anywhere.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;examples&#34;&gt;Examples&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;When a model needs a &amp;ldquo;database query&amp;rdquo;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Model -&amp;gt; MCP -&amp;gt; DB Adapter -&amp;gt; Database
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When a model needs to &amp;ldquo;call a web API&amp;rdquo;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Model -&amp;gt; MCP -&amp;gt; HTTP Adapter -&amp;gt; External REST API
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, MCP can be viewed as a foundational technology that &lt;strong&gt;standardizes the plugin ecosystem for AI&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id=&#34;background-and-need&#34;&gt;Background and Need&lt;/h3&gt;
&lt;p&gt;AI models are essentially &lt;strong&gt;text-based&lt;/strong&gt; in their input and output. However, real-world use requires many tasks such as DB lookups, API calls, and file input/output. Until now, individual solutions such as &lt;strong&gt;plugins, LangChain, and custom API bridges&lt;/strong&gt; had to be used. MCP packages these into a &lt;strong&gt;standardized protocol&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Previously, for AI applications to interact with external systems, &lt;strong&gt;custom integrations were needed for each model and each tool&lt;/strong&gt;. This greatly increased development and maintenance complexity, which was described as the &lt;strong&gt;M x N problem&lt;/strong&gt;. MCP simplifies this into an &lt;strong&gt;M + N structure&lt;/strong&gt;, helping AI applications connect with various tools in a standardized way.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/mcp-before-after.webp&#34; alt=&#34;MCP Architecture&#34;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Image source: &lt;a href=&#34;https://www.descope.com/learn/post/mcp&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://www.descope.com/learn/post/mcp&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;architecture-and-how-it-works&#34;&gt;Architecture and How It Works&lt;/h2&gt;
&lt;h3 id=&#34;client-server-structure&#34;&gt;Client-Server Structure&lt;/h3&gt;
&lt;p&gt;MCP adopts a &lt;strong&gt;client-server architecture&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;MCP client&lt;/strong&gt; is an AI application, such as Claude Desktop.&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;MCP server&lt;/strong&gt; provides external resources such as file systems, databases, and APIs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;communication-method&#34;&gt;Communication Method&lt;/h3&gt;
&lt;p&gt;MCP exchanges requests and responses based on &lt;strong&gt;JSON-RPC 2.0&lt;/strong&gt;, which &lt;strong&gt;improves interoperability through a standardized message exchange method&lt;/strong&gt;.
It also supports both &lt;strong&gt;local inter-process communication based on stdio&lt;/strong&gt; and &lt;strong&gt;HTTP + SSE (Server-Sent Events)-based&lt;/strong&gt; communication.&lt;/p&gt;
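&lt;p&gt;A JSON-RPC 2.0 exchange of the kind MCP uses can be sketched as follows. The method name &lt;code&gt;tools/call&lt;/code&gt; comes from the MCP specification, while the tool name and arguments here are invented for illustration:&lt;/p&gt;

```python
import json

# A JSON-RPC 2.0 request, as an MCP client might send it to a server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "searchCustomer", "arguments": {"name": "Hong Gil-dong"}},
}

# A matching JSON-RPC 2.0 response: same "id", a "result" field on success.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "1 customer found"}]},
}

wire = json.dumps(request)  # what actually travels over stdio or HTTP
print(wire)
print(json.loads(wire)["method"])
```

&lt;p&gt;The &lt;code&gt;id&lt;/code&gt; field is what lets the client match each response to its request, which is essential when several calls are in flight at once.&lt;/p&gt;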
&lt;h3 id=&#34;role-of-the-server&#34;&gt;Role of the Server&lt;/h3&gt;
&lt;p&gt;An MCP server performs the following functions.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Tool Registry&lt;/strong&gt;: Manages the list of available tools and functions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Authentication&lt;/strong&gt;: Verifies access permissions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Request Handler&lt;/strong&gt;: Handles client requests&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Response Formatter&lt;/strong&gt;: Processes results into a format the AI model can understand&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;An AI application can request the server&amp;rsquo;s &amp;ldquo;list of available tools&amp;rdquo; and then select and use an appropriate tool based on that list.&lt;/p&gt;
&lt;h2 id=&#34;developer-friendliness-and-extensibility&#34;&gt;Developer Friendliness and Extensibility&lt;/h2&gt;
&lt;p&gt;Anthropic released MCP as an &lt;strong&gt;open source standard&lt;/strong&gt; and provides SDKs for major languages such as &lt;strong&gt;Python, TypeScript, Java, Kotlin, and C#&lt;/strong&gt;. This offers the advantage that &lt;strong&gt;client and server implementations are generally simple&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;effects-and-benefits&#34;&gt;Effects and Benefits&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Clear instructions&lt;/strong&gt;: It is possible to clearly specify what data the LLM should handle&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Removal of ambiguity&lt;/strong&gt;: Multiple information sources can be clearly distinguished and referenced&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Support for specialized processing&lt;/strong&gt;: Dedicated processing for specific data formats is possible&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context examples&lt;/strong&gt;: Various contexts such as file systems, databases, and cloud services can be used together&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Thanks to these benefits, &lt;strong&gt;AI can expand its range of activity and provide more accurate, context-aware responses&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;adoption-timing-and-ecosystem-trends&#34;&gt;Adoption Timing and Ecosystem Trends&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;MCP was &lt;strong&gt;released as open source by Anthropic in November 2024&lt;/strong&gt;, and &lt;strong&gt;adoption by developer communities and major AI tools increased rapidly from early 2025&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;An official C# SDK has also been announced. Many MCP servers are now in operation, and &lt;strong&gt;security architecture and data protection&lt;/strong&gt; have become important concerns.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id=&#34;summary-table&#34;&gt;Summary Table&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Item&lt;/th&gt;
          &lt;th&gt;Description&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Definition&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;An open standard protocol that connects AI models with external data and tools&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Resolves the complexity of individual integrations and secures more flexible extensibility&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Structure&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Client-server structure, JSON-RPC, standardized message exchange&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;SDK&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Provides Python, TS, Java, Kotlin, C#, and more&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Benefits&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Greater clarity, extensibility, automation, and security&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Current trend&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Growing rapidly since release, with expanding security and practical use&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

      </description>
      
      <category>AI</category>
      
      <category>ChatGPT</category>
      
      <category>MCP</category>
      
    </item>
    
    <item>
      <title>MCP Server</title>
      <link>https://www.devkuma.com/en/docs/ai/mcp-server/</link>
      <pubDate>Sat, 30 Aug 2025 14:55:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/mcp-server/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-an-mcp-server&#34;&gt;What Is an MCP Server?&lt;/h2&gt;
&lt;p&gt;An MCP server &lt;strong&gt;exposes a standardized interface through which an AI model can access external resources such as tools, data, and APIs&lt;/strong&gt;.&lt;br&gt;
Simply put, it is &lt;strong&gt;a server that provides a collection of tools the AI can use&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;structure&#34;&gt;Structure&lt;/h2&gt;
&lt;p&gt;MCP is broadly divided into &lt;strong&gt;clients&lt;/strong&gt; and &lt;strong&gt;servers&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;MCP client&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;An application attached to an LLM environment, such as an IDE, chat UI, or notebook&lt;/li&gt;
&lt;li&gt;Receives the user&amp;rsquo;s prompt, runs the model, and sends requests to the MCP server when needed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP server&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Manages and provides multiple &lt;strong&gt;resources/tools/functions&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Communicates with clients through a standardized protocol based on JSON-RPC&lt;/li&gt;
&lt;li&gt;Examples: DB query server, file system server, API call server&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/mcp-client-server.png&#34; alt=&#34;MCP Client Server&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;what-the-server-provides&#34;&gt;What the Server Provides&lt;/h2&gt;
&lt;p&gt;An MCP server mainly provides four functions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Resources&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Databases, files, documents, API responses, and more&lt;/li&gt;
&lt;li&gt;Examples: &lt;code&gt;resource://db/customers&lt;/code&gt;, &lt;code&gt;resource://filesystem/project/README.md&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tools&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Callable functions or actions&lt;/li&gt;
&lt;li&gt;Examples: &lt;code&gt;searchCustomer(name)&lt;/code&gt;, &lt;code&gt;sendEmail(to, subject, body)&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prompts&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Provides predefined templates&lt;/li&gt;
&lt;li&gt;Example: &amp;ldquo;Prompt for generating SQL queries&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Events&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Delivers notifications or change events from the server in real time&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
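&lt;p&gt;The tool side of this can be sketched as a minimal, framework-free registry plus dispatcher (illustration only; a real server would use an official MCP SDK and speak JSON-RPC over stdio or HTTP, and the tool and database below are invented):&lt;/p&gt;

```python
# Minimal sketch of an MCP-style server: a registry of named tools
# plus a dispatcher that handles "list" and "call" requests.
TOOLS = {}

def tool(fn):
    # Decorator that registers a function as a callable tool.
    TOOLS[fn.__name__] = fn
    return fn

@tool
def searchCustomer(name: str) -> dict:
    # Stand-in for a real database lookup.
    fake_db = {"Hong Gil-dong": {"id": 1, "city": "Seoul"}}
    return fake_db.get(name, {})

def handle(request: dict) -> dict:
    # Dispatch a simplified request: list the available tools or call one.
    if request["method"] == "tools/list":
        return {"tools": sorted(TOOLS)}
    if request["method"] == "tools/call":
        fn = TOOLS[request["name"]]
        return {"result": fn(**request["arguments"])}
    return {"error": "unknown method"}

print(handle({"method": "tools/list"}))
print(handle({"method": "tools/call", "name": "searchCustomer",
              "arguments": {"name": "Hong Gil-dong"}}))
```

&lt;p&gt;The &lt;code&gt;tools/list&lt;/code&gt; response is what lets an AI application discover which tools exist before deciding which one to call.&lt;/p&gt;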
&lt;h2 id=&#34;example-flow&#34;&gt;Example Flow&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;A user enters in an IDE: &amp;ldquo;Search for customer &amp;lsquo;Hong Gil-dong&amp;rsquo; in the DB&amp;rdquo;&lt;/li&gt;
&lt;li&gt;The LLM decides it should call the &amp;ldquo;searchCustomer&amp;rdquo; tool through the MCP client&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;MCP client&lt;/strong&gt; sends the request to the &lt;strong&gt;MCP server&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The MCP server performs the actual DB lookup and returns the result&lt;/li&gt;
&lt;li&gt;The LLM summarizes the result in user-friendly language and displays it&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;mcp-server-example&#34;&gt;MCP Server Example&lt;/h2&gt;
&lt;p&gt;For example, a file system MCP server might provide:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Resource&lt;/strong&gt;: Files in a project folder, such as &lt;code&gt;/src/main.kt&lt;/code&gt; and &lt;code&gt;/README.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tools&lt;/strong&gt;: File reading, writing, and search functions such as &lt;code&gt;readFile&lt;/code&gt; and &lt;code&gt;writeFile&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prompts&lt;/strong&gt;: Templates such as &amp;ldquo;Refactor this code&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When a server configured this way is connected, AI can directly explore and modify project files.&lt;/p&gt;
&lt;h2 id=&#34;analogy&#34;&gt;Analogy&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;MCP server&lt;/strong&gt; = &amp;ldquo;hotel concierge&amp;rdquo;
&lt;ul&gt;
&lt;li&gt;When a guest (LLM) asks, &amp;ldquo;Recommend tourist attractions,&amp;rdquo; the concierge handles multiple APIs and databases behind the scenes and provides organized information&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP client&lt;/strong&gt; = &amp;ldquo;hotel front desk&amp;rdquo;
&lt;ul&gt;
&lt;li&gt;Talks directly with the guest, receives requests, and forwards them to the concierge&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;An MCP server acts as a backend that provides a standardized interface so an LLM can use external tools and data safely and consistently&lt;/strong&gt;.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>RAG (Retrieval-Augmented Generation)</title>
      <link>https://www.devkuma.com/en/docs/ai/rag/</link>
      <pubDate>Sat, 30 Aug 2025 13:09:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/rag/</guid>
      <description>
        
        
        &lt;h2 id=&#34;rag-retrieval-augmented-generation-concept&#34;&gt;RAG (Retrieval-Augmented Generation) Concept&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;RAG = Retrieval + Generation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;An LLM (large language model) does not generate answers only from its internal knowledge. Instead, it retrieves relevant information from external databases such as documents, vector DBs, wikis, and company materials, then generates an answer based on those results.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, it is not simply using &amp;ldquo;what the model knows,&amp;rdquo; but is like a smart assistant that &amp;ldquo;looks things up externally when needed and then answers.&amp;rdquo;&lt;/p&gt;
&lt;h2 id=&#34;why-is-it-needed&#34;&gt;Why Is It Needed?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Overcoming the knowledge limits of LLMs
&lt;ul&gt;
&lt;li&gt;LLMs do not know about information that appeared after their training cutoff.&lt;/li&gt;
&lt;li&gt;For example, a GPT model cannot answer questions about events that occurred after it was trained.&lt;/li&gt;
&lt;li&gt;With RAG, materials retrieved from a DB or the web can fill this gap.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Reducing hallucinations
&lt;ul&gt;
&lt;li&gt;LLMs sometimes make up things they do not know.&lt;/li&gt;
&lt;li&gt;Using external evidence can increase the reliability of answers.&lt;/li&gt;
&lt;li&gt;Instead of unsupported answers, responses can be based on actual documents or databases.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Using customized knowledge
&lt;ul&gt;
&lt;li&gt;LLMs can use &lt;strong&gt;dedicated data&lt;/strong&gt; such as internal company documents, reports, customer FAQs, papers, and codebases.&lt;/li&gt;
&lt;li&gt;Internal confidential documents can be used without training the model on them.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;how-rag-works&#34;&gt;How RAG Works&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Query input
&lt;ul&gt;
&lt;li&gt;The user enters a question.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Retrieval stage
&lt;ul&gt;
&lt;li&gt;The question is vectorized as an embedding, then related documents are retrieved from a vector database.&lt;/li&gt;
&lt;li&gt;Representative DBs: Pinecone, Weaviate, Milvus, FAISS, and others.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Generation stage
&lt;ul&gt;
&lt;li&gt;The LLM generates an answer by referring to the retrieved documents and returns it, often together with the sources it used.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/rag.png&#34; alt=&#34;RAG&#34;&gt;&lt;/p&gt;
&lt;p&gt;In short, it has a &lt;strong&gt;&amp;ldquo;find -&amp;gt; refer -&amp;gt; answer&amp;rdquo;&lt;/strong&gt; structure.&lt;/p&gt;
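&lt;p&gt;The &amp;ldquo;find -&amp;gt; refer -&amp;gt; answer&amp;rdquo; structure can be sketched with a toy retriever, where word-overlap scoring stands in for real embedding search and a string template stands in for the LLM call (the documents below are invented):&lt;/p&gt;

```python
def retrieve(query, documents, top_k=1):
    """Score each document by word overlap with the query and return
    the best matches (a crude stand-in for vector similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words.intersection(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer(query, documents):
    # Generation stage: a real system would send this augmented prompt
    # to an LLM; here we just show the prompt that would be sent.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "2023 revenue was 920 billion KRW, up 8% from the previous year.",
    "The cafeteria menu changes every Monday.",
]
print(answer("What was the revenue in 2023?", docs))
```

&lt;p&gt;Production systems replace the word-overlap scorer with embedding vectors stored in a vector database, but the overall pipeline shape is the same.&lt;/p&gt;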
&lt;h2 id=&#34;example&#34;&gt;Example&lt;/h2&gt;
&lt;p&gt;Suppose a question comes in: &amp;ldquo;What was our company&amp;rsquo;s revenue in 2023?&amp;rdquo;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;LLM alone: &amp;ldquo;Revenue in 2023 was 100 million dollars.&amp;rdquo; (No evidence, may be wrong)&lt;/li&gt;
&lt;li&gt;Using RAG: Search internal financial reports -&amp;gt; retrieve related data -&amp;gt; &amp;ldquo;Our company&amp;rsquo;s revenue in 2023 was 920 billion KRW, an 8% increase from the previous year.&amp;rdquo; (Evidence-based answer)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;understanding-through-an-analogy&#34;&gt;Understanding Through an Analogy&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;LLM alone&lt;/strong&gt;: A person with a good memory, but they may not know the latest information.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Using RAG&lt;/strong&gt;: A person with a good memory answers while referring to a &lt;strong&gt;dictionary or search engine&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;comparing-rag-and-fine-tuning&#34;&gt;Comparing RAG and Fine-Tuning&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fine-tuning: Further trains the model itself, &amp;ldquo;internalizing&amp;rdquo; new knowledge&lt;/li&gt;
&lt;li&gt;RAG: Leaves the model as is and retrieves external materials for use&lt;/li&gt;
&lt;/ul&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Method&lt;/th&gt;
          &lt;th&gt;Advantages&lt;/th&gt;
          &lt;th&gt;Disadvantages&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Fine-tuning&lt;/td&gt;
          &lt;td&gt;Fast and natural responses&lt;/td&gt;
          &lt;td&gt;Retraining is required whenever data is updated&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;RAG&lt;/td&gt;
          &lt;td&gt;Can always reflect up-to-date and customized information; quick to build&lt;/td&gt;
          &lt;td&gt;Answer quality depends on retrieval quality&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In practice, RAG is often combined with some fine-tuning when needed.&lt;/p&gt;
&lt;h2 id=&#34;technology-stack-used-to-implement-rag&#34;&gt;Technology Stack Used to Implement RAG&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Embedding models: OpenAI Embeddings, Sentence-BERT, and others&lt;/li&gt;
&lt;li&gt;Vector DBs: Pinecone, Weaviate, Milvus, FAISS&lt;/li&gt;
&lt;li&gt;LLMs: GPT, Claude, LLaMA, Gemini, and others&lt;/li&gt;
&lt;li&gt;Frameworks: LangChain, LlamaIndex, Haystack&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;RAG is a method where an LLM uses a retrieval system together with the model to generate answers that are reliable and reflect up-to-date information.&lt;/li&gt;
&lt;li&gt;In other words, it is a core technology for expanding knowledge and strengthening reliability.&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
      <category>RAG</category>
      
    </item>
    
    <item>
      <title>AI Agent</title>
      <link>https://www.devkuma.com/en/docs/ai/agent/</link>
      <pubDate>Sat, 30 Aug 2025 13:49:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/agent/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-an-ai-agent&#34;&gt;What Is an AI Agent?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Unlike an LLM (large language model) that simply generates answers, an &lt;strong&gt;AI Agent&lt;/strong&gt; is &lt;strong&gt;an intelligent system designed to interact with an environment and achieve a specific goal&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In other words, it is &lt;strong&gt;AI that judges for itself, uses necessary tools, takes action, and improves results&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;core-elements-of-an-ai-agent&#34;&gt;Core Elements of an AI Agent&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Goal&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;The task the agent must perform, such as answering customer questions, writing reports, or modifying code.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Perception&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;The stage of understanding the environment or input, such as user input, sensor data, or API responses.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Action&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Measures taken to achieve the goal, such as search, calculation, external API calls, or DB updates.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Feedback Loop&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;The process of evaluating results and modifying the next action when needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
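&lt;p&gt;The four core elements above can be wired into a minimal loop. The goal rule and the tool below are stubs: a real agent would ask an LLM to choose the action and would call real APIs.&lt;/p&gt;

```python
def run_agent(goal, tools, max_steps=5):
    """Minimal agent loop: perceive, plan, act, and feed results back.
    All tool and goal names here are illustrative stand-ins."""
    observations = []
    for _ in range(max_steps):
        # Perception + planning: a real agent would ask an LLM which
        # action serves the goal; here a trivial keyword rule decides.
        if observations:
            action = "finish"  # feedback loop: we have a result, so stop
        elif "weather" in goal:
            action = "search"
        else:
            action = "finish"
        if action == "finish":
            return observations
        # Action: call the chosen tool and record the observation.
        observations.append(tools[action](goal))
    return observations

# Illustrative tool registry: a stubbed search instead of a real API.
tools = {"search": lambda q: f"result for: {q} (temperature 28C)"}
print(run_agent("weather in Seoul", tools))
```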
&lt;h2 id=&#34;difference-between-an-ai-agent-and-a-simple-llm&#34;&gt;Difference Between an AI Agent and a Simple LLM&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;LLM&lt;/strong&gt;: Question -&amp;gt; answer (simple Q&amp;amp;A)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI Agent&lt;/strong&gt;: Question -&amp;gt; plan -&amp;gt; search/use tools -&amp;gt; execute multiple steps -&amp;gt; final answer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An LLM can only answer &amp;ldquo;Tell me the weather in Seoul&amp;rdquo; from its training data, which may be out of date.&lt;/li&gt;
&lt;li&gt;An AI Agent can &lt;strong&gt;call a real-time API&lt;/strong&gt; and answer with the current temperature and weather.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;technologies-used-by-ai-agents&#34;&gt;Technologies Used by AI Agents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;LLM (Large Language Model)&lt;/strong&gt; -&amp;gt; Natural language understanding and reasoning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAG (Retrieval-Augmented Generation)&lt;/strong&gt; -&amp;gt; External knowledge retrieval&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool use (Plugins, APIs)&lt;/strong&gt; -&amp;gt; Calculators, browsers, databases, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Planner&lt;/strong&gt; -&amp;gt; Breaks complex tasks into multiple steps and executes them&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Memory&lt;/strong&gt; -&amp;gt; Remembers past conversations or states to perform continuous tasks&lt;/li&gt;
&lt;/ul&gt;
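&lt;p&gt;Of these building blocks, memory is the easiest to sketch: the agent keeps a rolling window of past turns and feeds it into each new request. The class below is an illustrative stub, not a production memory system.&lt;/p&gt;

```python
class ConversationMemory:
    """Keeps a rolling window of past turns so the agent can refer back."""

    def __init__(self, max_turns=10):
        self.turns = []
        self.max_turns = max_turns

    def add(self, role, text):
        self.turns.append((role, text))
        # Keep only the most recent turns to bound the context size.
        self.turns = self.turns[-self.max_turns:]

    def as_context(self):
        """Render remembered turns as a prompt prefix for the next call."""
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

mem = ConversationMemory(max_turns=2)
mem.add("user", "My name is Kim.")
mem.add("agent", "Nice to meet you, Kim.")
mem.add("user", "What is my name?")
print(mem.as_context())  # only the 2 most recent turns survive
```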
&lt;h2 id=&#34;representative-ai-agent-examples&#34;&gt;Representative AI Agent Examples&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ChatGPT + Tools (OpenAI)&lt;/strong&gt; -&amp;gt; Can run code, browse the web, and analyze data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AutoGPT, BabyAGI&lt;/strong&gt; -&amp;gt; Open source autonomous agents&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;LangChain Agents&lt;/strong&gt; -&amp;gt; Connect multiple tools to automate workflows&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Microsoft Copilot, Google Gemini Agents&lt;/strong&gt; -&amp;gt; AI assistants integrated with productivity tools&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;application-areas-of-ai-agents&#34;&gt;Application Areas of AI Agents&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Work automation&lt;/strong&gt;: Email summarization, schedule management, document generation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Customer support&lt;/strong&gt;: FAQ answers and consultation support&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Research and analysis&lt;/strong&gt;: Paper search, data analysis, report writing&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Development assistance&lt;/strong&gt;: Code generation, testing, debugging&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Robotics&lt;/strong&gt;: Autonomous driving, drones, smart factories&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;An AI Agent goes beyond an LLM that simply answers questions. It is an intelligent system that sets goals, uses tools, and interacts with its environment&lt;/strong&gt;.&lt;br&gt;
In short, it can be understood as &amp;ldquo;&lt;strong&gt;actionable AI&lt;/strong&gt;.&amp;rdquo;&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Multi-Model</title>
      <link>https://www.devkuma.com/en/docs/ai/multi-model/</link>
      <pubDate>Sat, 30 Aug 2025 13:14:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/multi-model/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-multi-model&#34;&gt;What Is Multi-Model?&lt;/h2&gt;
&lt;p&gt;It refers to &lt;strong&gt;an approach that uses multiple models together in a single AI system&lt;/strong&gt;.&lt;br&gt;
In other words, instead of assigning everything to a single model, it &lt;strong&gt;combines the strengths of each model&lt;/strong&gt; to achieve better performance or more diverse functions.&lt;/p&gt;
&lt;p&gt;For example, a retrieval model may be paired with a generative model, or a small fast model with a large accurate one, each handling the part it does best.&lt;/p&gt;
&lt;h2 id=&#34;why-is-it-needed&#34;&gt;Why Is It Needed?&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;When one model is not enough&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example: When both images and text must be handled&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use of specialized models&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Uses a large general-purpose model together with domain-specific models&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Performance optimization&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Heavy and slow models are used only for core reasoning, while lightweight models handle preprocessing and simple tasks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cost reduction&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Always using a huge model like GPT-4 is expensive, so routine tasks are assigned to smaller models and only the difficult parts are routed to a larger one&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;types-of-multi-model&#34;&gt;Types of Multi-Model&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Different from Multi-Modal&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multi-Model != Multi-Modal&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Multi-Modal&lt;/em&gt;: One model that processes &lt;strong&gt;multiple input forms&lt;/strong&gt;, such as images, text, and speech&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Multi-Model&lt;/em&gt;: A system built by &lt;strong&gt;combining multiple models&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configuration methods&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Parallel (Ensemble)&lt;/strong&gt;: Multiple models produce answers at the same time, and the results are combined to make the final decision&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Examples: Voting, blending, weighted sum&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Serial (Pipeline)&lt;/strong&gt;: The output of one model is passed as the input to another model&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example: Image captioning model -&amp;gt; text summarization model -&amp;gt; question answering model&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hybrid&lt;/strong&gt;: Selects models depending on the situation, such as with a router model&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
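&lt;p&gt;The parallel, serial, and hybrid configurations can each be sketched in a few lines. Every &amp;ldquo;model&amp;rdquo; below is just a stub function standing in for a real one, and the routing keyword is an assumed heuristic.&lt;/p&gt;

```python
from collections import Counter

# Stub models: each maps an input to an answer (all names hypothetical).
fast_model = lambda q: "short answer"
accurate_model = lambda q: "detailed answer"
caption_model = lambda img: "a cat on a sofa"
summarizer = lambda text: f"summary: {text}"

def ensemble(query, models):
    """Parallel: every model answers, and a majority vote decides."""
    votes = Counter(m(query) for m in models)
    return votes.most_common(1)[0][0]

def pipeline(image):
    """Serial: one model's output becomes the next model's input."""
    return summarizer(caption_model(image))

def router(query):
    """Hybrid: route hard-looking queries to the larger model."""
    model = accurate_model if "explain" in query else fast_model
    return model(query)

print(ensemble("q", [fast_model, fast_model, accurate_model]))
print(pipeline("photo.png"))
print(router("explain RAG in depth"))
```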
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Retrieval + generation (RAG)&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Retrieval model (vector search) + generative model (LLM)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Copilot-style tools&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Code assistance: a small model for fast code completion, GPT-4 for sophisticated bug fixes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Autonomous driving&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Video recognition CNN + behavior planning RL model&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Healthcare&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Medical knowledge model + general LLM combination&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;multi-model-vs-single-model&#34;&gt;Multi-Model vs Single Model&lt;/h2&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Category&lt;/th&gt;
          &lt;th&gt;Single Model&lt;/th&gt;
          &lt;th&gt;Multi-Model&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Structure&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;One model performs everything&lt;/td&gt;
          &lt;td&gt;Multiple models divide roles&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Simple and easy to manage&lt;/td&gt;
          &lt;td&gt;Higher accuracy, more flexibility, and ability to use the latest technologies&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;General-purpose models have performance limits&lt;/td&gt;
          &lt;td&gt;System is complex and requires coordination&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Multi-Model is a system design approach that combines multiple models and uses each model&amp;rsquo;s strengths to produce better results&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Examples include combining a &amp;ldquo;retrieval model + generative model,&amp;rdquo; &amp;ldquo;small model + large model,&amp;rdquo; or &amp;ldquo;specialized model + general-purpose model.&amp;rdquo;&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
      <category>ChatGPT</category>
      
      <category>LLM</category>
      
    </item>
    
    <item>
      <title>Function Calling (Tools)</title>
      <link>https://www.devkuma.com/en/docs/ai/function-calling/</link>
      <pubDate>Sat, 30 Aug 2025 14:50:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/function-calling/</guid>
      <description>
        
        
        &lt;h2 id=&#34;what-is-function-calling-tool-calling&#34;&gt;What Is Function Calling (Tool Calling)?&lt;/h2&gt;
&lt;p&gt;Function Calling is a feature that allows an LLM not only to generate text, but also to &lt;strong&gt;directly call external functions or APIs&lt;/strong&gt;.&lt;br&gt;
For example, the &lt;code&gt;function_call&lt;/code&gt; feature provided by the OpenAI API is a representative case.&lt;/p&gt;
&lt;h3 id=&#34;how-it-works&#34;&gt;How It Works&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;User input&lt;/strong&gt;: &amp;ldquo;Tell me today&amp;rsquo;s weather in Seoul&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;LLM judgment&lt;/strong&gt;: &amp;ldquo;This needs a weather API call&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool call&lt;/strong&gt;: Executes a predefined function, such as &lt;code&gt;getWeather(location: string)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Receive result&lt;/strong&gt;: &lt;code&gt;{ &amp;quot;location&amp;quot;: &amp;quot;Seoul&amp;quot;, &amp;quot;temp&amp;quot;: 28, &amp;quot;condition&amp;quot;: &amp;quot;Sunny&amp;quot; }&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Generate final response&lt;/strong&gt;: &amp;ldquo;Today in Seoul, it is sunny and 28 degrees.&amp;rdquo;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In other words, the LLM plays the role of &lt;strong&gt;converting natural language into function input&lt;/strong&gt;, while the actual calculation or search is handled by external functions or tools.&lt;/p&gt;
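&lt;p&gt;This five-step flow can be simulated end to end without any vendor SDK. The model&amp;rsquo;s judgment below is stubbed with a keyword rule, but the parse-dispatch-respond shape mirrors real function-calling loops.&lt;/p&gt;

```python
import json

def get_weather(location):
    """Stand-in for a real weather API call."""
    return {"location": location, "temp": 28, "condition": "Sunny"}

TOOLS = {"getWeather": get_weather}

def fake_llm(user_input):
    """Stub for the LLM's judgment step: decide whether a tool is
    needed and emit its name plus JSON-encoded arguments."""
    if "weather" in user_input.lower():
        return {"name": "getWeather", "arguments": json.dumps({"location": "Seoul"})}
    return None

def handle(user_input):
    call = fake_llm(user_input)
    if call is None:
        return "plain text answer"
    # Tool call: parse the model-produced arguments and dispatch.
    args = json.loads(call["arguments"])
    result = TOOLS[call["name"]](**args)
    # Final response: an LLM would normally phrase this; we format directly.
    return f"Today in {result['location']}, it is {result['condition'].lower()} and {result['temp']} degrees."

print(handle("Tell me today's weather in Seoul"))
```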
&lt;h3 id=&#34;advantages&#34;&gt;Advantages&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Use of real-time data&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;The LLM itself does not know information after its training cutoff. Function calls can connect it to up-to-date APIs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Accurate calculation&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;LLMs are weak at mathematical operations. Calling a calculator API instead returns accurate results.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Work automation&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;Send an email&amp;rdquo; -&amp;gt; calls an email API&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Get the customer list from the DB&amp;rdquo; -&amp;gt; executes a DB query&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;difference-between-mcp-and-function-calling&#34;&gt;Difference Between MCP and Function Calling&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Function Calling&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A method where an LLM directly calls individual tools&lt;/li&gt;
&lt;li&gt;Requires call definitions at the API or function level&lt;/li&gt;
&lt;li&gt;Examples: &lt;code&gt;getWeather()&lt;/code&gt;, &lt;code&gt;searchStockPrice()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;MCP (Model Context Protocol)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A protocol that connects multiple tools, such as functions and resources, &lt;strong&gt;in a standardized way&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Abstracts each tool as a &amp;ldquo;provider,&amp;rdquo; allowing the LLM to access them consistently&lt;/li&gt;
&lt;li&gt;A higher-level concept that systematically manages &amp;ldquo;tool calling,&amp;rdquo; including function calls&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;practical-examples&#34;&gt;Practical Examples&lt;/h2&gt;
&lt;h3 id=&#34;1-using-function-calling-alone&#34;&gt;1. Using Function Calling Alone&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;OpenAI Function Call example:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;&#34;&gt;&lt;code class=&#34;language-json&#34; data-lang=&#34;json&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;name&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;getWeather&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;description&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;Check current weather&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;parameters&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;type&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;object&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;properties&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;      &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;location&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;{&lt;/span&gt; &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;type&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;string&amp;#34;&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;},&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;    &lt;span style=&#34;color:#204a87;font-weight:bold&#34;&gt;&amp;#34;required&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;:&lt;/span&gt; &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;[&lt;/span&gt;&lt;span style=&#34;color:#4e9a06&#34;&gt;&amp;#34;location&amp;#34;&lt;/span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;  &lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;&lt;span style=&#34;color:#000;font-weight:bold&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The model maps &lt;code&gt;&amp;quot;Seoul&amp;quot;&lt;/code&gt; to the &lt;code&gt;location&lt;/code&gt; parameter; the application then executes the actual API call and returns the result to the model.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
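&lt;p&gt;Before dispatching, an application will typically validate the model-produced arguments against the declared schema. The check below is a minimal hand-rolled sketch over the &lt;code&gt;getWeather&lt;/code&gt; definition above, not a full JSON Schema validator.&lt;/p&gt;

```python
SCHEMA = {
    "name": "getWeather",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

def validate_args(schema, args):
    """Check required keys and basic string types before running the tool."""
    params = schema["parameters"]
    for key in params["required"]:
        if key not in args:
            return False, f"missing required parameter: {key}"
    for key, spec in params["properties"].items():
        if key in args and spec["type"] == "string" and not isinstance(args[key], str):
            return False, f"{key} must be a string"
    return True, "ok"

print(validate_args(SCHEMA, {"location": "Seoul"}))  # valid
print(validate_args(SCHEMA, {}))                     # missing location
```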
&lt;h3 id=&#34;2-mcp-based-function-calling&#34;&gt;2. MCP-Based Function Calling&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;MCP groups functions into a &lt;strong&gt;Resource Registry&lt;/strong&gt; for management&lt;/li&gt;
&lt;li&gt;The LLM does not need to know the &amp;ldquo;weather API&amp;rdquo; directly; MCP provides an abstracted interface such as &lt;code&gt;tools.weather&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Multiple tools are connected through a &lt;strong&gt;standard protocol&lt;/strong&gt;, reducing confusion and duplication in function calls&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;analogy&#34;&gt;Analogy&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Function Calling&lt;/strong&gt; = individual &amp;ldquo;instructions for using a tool&amp;rdquo; (a single tool such as scissors, a hammer, or a screwdriver)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP&lt;/strong&gt; = &amp;ldquo;toolbox/workbench&amp;rdquo; (a system that organizes tools in a standardized way)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Function Calling is a technology for calling individual APIs&lt;/strong&gt;, while &lt;strong&gt;MCP is a higher-level layer that manages and connects multiple functions or tools in a standardized way&lt;/strong&gt;.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Vibe Coding</title>
      <link>https://www.devkuma.com/en/docs/ai/vibe-coding/</link>
      <pubDate>Sat, 30 Aug 2025 17:55:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/vibe-coding/</guid>
      <description>
        
        
        &lt;h2 id=&#34;definition-of-vibe-coding&#34;&gt;Definition of Vibe Coding&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Vibe coding&lt;/strong&gt; is not a formal academic term. It is a phrase recently used in developer communities and means &lt;strong&gt;a coding style where programming is done intuitively and enjoyably, following the flow or vibe&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Especially in the AI era, it emphasizes a development style where, instead of writing every line of code directly, developers &lt;strong&gt;give goals and feedback in natural language and guide an LLM to generate and modify code&lt;/strong&gt;.
&lt;ul&gt;
&lt;li&gt;Humans focus on &amp;ldquo;what to build,&amp;rdquo; while AI handles implementation details.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The term became popular in February 2025 when &lt;strong&gt;Andrej Karpathy&lt;/strong&gt; mentioned the idea of &amp;ldquo;forgetting the code and fully giving in to the vibe.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;conceptual-features&#34;&gt;Conceptual Features&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Immersion and improvisation&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Coding immediately according to ideas or instincts that come to mind, without a fixed design&lt;/li&gt;
&lt;li&gt;A feeling similar to a musical &lt;strong&gt;jam session&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow-centered&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Freely implements things that &amp;ldquo;look fun&amp;rdquo; or &amp;ldquo;feel like what I want to do now&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enjoyment and flow&lt;/strong&gt; are more important than code perfection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fast prototyping&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Quickly creates and runs a minimum working feature&lt;/li&gt;
&lt;li&gt;The goal is &lt;strong&gt;to make it work first&lt;/strong&gt;, rather than to build a perfect architecture&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;contexts-where-vibe-coding-is-used&#34;&gt;Contexts Where Vibe Coding Is Used&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Early coding learning&lt;/strong&gt; -&amp;gt; Maintains motivation through free experimentation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hackathons/prototype creation&lt;/strong&gt; -&amp;gt; Useful when ideas must be implemented quickly&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Creative work&lt;/strong&gt; -&amp;gt; Experimental implementation in music, art, games, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI-assisted coding&lt;/strong&gt; -&amp;gt; Building in flow while conversing with AI such as ChatGPT&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;workflow-typical-steps&#34;&gt;Workflow (Typical Steps)&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Describe the goal&lt;/strong&gt; -&amp;gt; Explain the result, such as &amp;ldquo;process vacation approvals/rejections with a Slack bot&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Generate a draft&lt;/strong&gt; -&amp;gt; The LLM creates the project skeleton&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Run and give feedback&lt;/strong&gt; -&amp;gt; Point out errors or missing features in natural language and repeat modifications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Expand features/refactor&lt;/strong&gt; -&amp;gt; Present test criteria and converge toward the target&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The key point is that development proceeds based on &lt;strong&gt;conversational instructions and feedback&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;pros-and-cons&#34;&gt;Pros and Cons&lt;/h2&gt;
&lt;h3 id=&#34;pros&#34;&gt;Pros&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Easy to express creativity and stay immersed&lt;/li&gt;
&lt;li&gt;Fast result checking, making it ideal for prototyping&lt;/li&gt;
&lt;li&gt;Motivates learning and experimentation because it is fun and easy to continue&lt;/li&gt;
&lt;li&gt;Lowers the entry barrier for non-developers because they can build with AI&lt;/li&gt;
&lt;li&gt;Developers move toward a &lt;strong&gt;design and quality management role&lt;/strong&gt; rather than detailed implementation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;cons&#34;&gt;Cons&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Lack of structural completeness and stability&lt;/li&gt;
&lt;li&gt;Risk of bugs and security vulnerabilities&lt;/li&gt;
&lt;li&gt;Code the developer did not write directly can be difficult to maintain and debug&lt;/li&gt;
&lt;li&gt;Not suitable for complex systems such as multi-file projects or legacy integrations&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;difference-from-ai-assisted-coding&#34;&gt;Difference from AI-Assisted Coding&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AI-assisted coding (for example, Copilot)&lt;/strong&gt; -&amp;gt; The developer writes the code, and AI supplements or recommends&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vibe coding&lt;/strong&gt; -&amp;gt; &lt;strong&gt;AI writes most of the code&lt;/strong&gt;, while the human presents goals, constraints, and tests&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;practical-guardrails&#34;&gt;Practical Guardrails&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Tests first&lt;/strong&gt; -&amp;gt; Define unit, integration, and E2E tests first&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Static analysis/security checks&lt;/strong&gt; -&amp;gt; Automate linters, SAST, and dependency vulnerability checks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Review and records&lt;/strong&gt; -&amp;gt; Document requirements, acceptance criteria, and risks, then approve the final result&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sandbox environment&lt;/strong&gt; -&amp;gt; Use a safe execution space and block sensitive information&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Refactoring before production&lt;/strong&gt; -&amp;gt; Schedule a separate sprint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fix domain knowledge in place&lt;/strong&gt; -&amp;gt; Continuously provide API specs, error cases, and performance criteria&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;suitable-and-unsuitable-cases&#34;&gt;Suitable and Unsuitable Cases&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Suitable&lt;/strong&gt; -&amp;gt; Hackathons, PoCs, personal tools, UI/frontend prototypes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Requires caution&lt;/strong&gt; -&amp;gt; High-trust or regulated environments such as finance, healthcare, and embedded systems&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;starter-prompt-template&#34;&gt;Starter Prompt Template&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;: What is being built, who uses it, and why&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;: List of required and optional requirements&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Constraints/criteria&lt;/strong&gt;: Security, performance, accessibility, licensing&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tech stack&lt;/strong&gt;: Language, framework, DB, deployment method&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Acceptance criteria&lt;/strong&gt;: Test scenarios that must pass&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Working method&lt;/strong&gt;: Small PRs, step-by-step testing, commit rule compliance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Example&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;Build a personal shared to-do board with Next.js + SQLite. First propose 5 required features and 3 E2E tests, then implement each feature in a TDD cycle. Security should include OAuth, XSS prevention, and rate limiting.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Vibe coding is a development style that follows inspiration and flow rather than formal design, especially using AI to experiment improvisationally and produce results quickly&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;It is powerful for prototypes, hackathons, and personal projects, but security, testing, and refactoring are essential for production-level use.&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Application Areas of Artificial Intelligence</title>
      <link>https://www.devkuma.com/en/docs/ai/applications/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/applications/</guid>
      <description>
        
        
        &lt;h2 id=&#34;healthcare&#34;&gt;Healthcare&lt;/h2&gt;
&lt;p&gt;Medical image analysis, drug development, and personalized treatment for patients&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/eye.png&#34; alt=&#34;Retinal image&#34;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example: AI developed by Google&amp;rsquo;s DeepMind is being used to help detect eye diseases early.&lt;/li&gt;
&lt;li&gt;Image source: &lt;a href=&#34;https://www.irobotnews.com/news/articleView.html?idxno=8041&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Robot Newspaper, &amp;ldquo;Google DeepMind Uses Artificial Intelligence for Early Diagnosis of Eye Diseases&amp;rdquo;&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;real-world-examples-of-medical-ai&#34;&gt;Real-World Examples of Medical AI&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Image diagnosis: AI reads X-ray and MRI images to detect early-stage cancer. At one hospital in the United States, the reading error rate decreased by 20%.&lt;/li&gt;
&lt;li&gt;Drug development: Candidate substance discovery, which traditionally took more than 10 years, can be shortened to several months with AI-based methods.&lt;/li&gt;
&lt;li&gt;Personalized treatment: Analyzes a patient&amp;rsquo;s genetic information and recommends the most effective anticancer drug.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&#34;https://www.devkuma.com/docs/ai/new-drug-development.png&#34; alt=&#34;Drug development process comparison&#34;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Figure 3: Comparison of traditional drug development processes and AI-based processes&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;autonomous-driving&#34;&gt;Autonomous Driving&lt;/h2&gt;
&lt;p&gt;Autonomous driving technology aims to recognize road conditions in real time, determine the optimal driving strategy based on them, and manage traffic flow efficiently. It detects the surrounding environment through sensors such as cameras, radar, and LiDAR, and AI algorithms analyze this information to automatically adjust vehicle speed, direction, following distance, and more. This can reduce traffic accidents, improve road efficiency, and strengthen the safety of drivers and pedestrians.&lt;/p&gt;
&lt;h2 id=&#34;finance&#34;&gt;Finance&lt;/h2&gt;
&lt;p&gt;In finance, AI is applied to many areas, including fraudulent transaction detection, automated investment strategies, and risk management. In fraud detection, it analyzes abnormal transaction patterns in real time to identify suspicious transactions. In automated investment strategy, it optimizes portfolios based on market data. AI is also used to help financial institutions predict and manage credit risk, market risk, and other risks, improving both financial stability and operational efficiency.&lt;/p&gt;
&lt;h2 id=&#34;manufacturing&#34;&gt;Manufacturing&lt;/h2&gt;
&lt;p&gt;In manufacturing, AI is applied to smart factories, predictive maintenance, automated quality control, and more. Smart factories connect production equipment and logistics systems with AI, enabling efficient process management. Predictive maintenance analyzes sensor data to predict equipment failures in advance and optimizes maintenance plans, reducing costs and time. Automated quality control detects defects that may occur during production in real time and improves product quality.&lt;/p&gt;
&lt;h2 id=&#34;education&#34;&gt;Education&lt;/h2&gt;
&lt;p&gt;In education, AI is used to provide personalized learning, automated grading, intelligent tutoring systems, and more. It analyzes each student&amp;rsquo;s learning level and pace to recommend personalized learning content, and it automatically grades assignments and exams to reduce teachers&amp;rsquo; workload. Intelligent tutoring systems also analyze students&amp;rsquo; learning patterns, reinforce weak areas, and provide real-time feedback to maximize learning effectiveness.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
    </item>
    
    <item>
      <title>Ethics and Future Prospects</title>
      <link>https://www.devkuma.com/en/docs/ai/ethics-and-future-prospects/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/ethics-and-future-prospects/</guid>
      <description>
        
        
        &lt;h2 id=&#34;ethical-considerations&#34;&gt;Ethical Considerations&lt;/h2&gt;
&lt;p&gt;The development and spread of artificial intelligence technology bring various ethical issues. The main considerations are as follows.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Privacy protection&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AI systems depend on large amounts of data for learning and operation. In this process, sensitive personal information may be collected and processed, and there is a risk of data leakage or misuse. Therefore, when developing and using AI, compliance with privacy laws and secure data management are essential.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Algorithmic bias&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AI models can learn the biases included in training data as they are. This can lead to unfair judgments or discriminatory outcomes against certain groups or individuals and can undermine social trust. To prevent this, it is necessary to review data bias and design algorithms with fairness in mind.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Replacement of human labor&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AI&amp;rsquo;s automation capabilities can increase productivity and efficiency while also replacing the roles of some jobs. This can cause changes in employment structures and social inequality, making policy preparation and retraining programs important.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Copyright issues&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Because AI learns from and generates content, it may conflict with someone&amp;rsquo;s content rights. Copying or modifying copyrighted works without the copyright holder&amp;rsquo;s permission may constitute copyright infringement.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;is-ghibli-style-a-copyright-issue&#34;&gt;Is &amp;ldquo;Ghibli Style&amp;rdquo; a Copyright Issue?&lt;/h3&gt;
&lt;p&gt;Specific &lt;strong&gt;expressions&lt;/strong&gt; are protected, but abstract &lt;strong&gt;ideas&lt;/strong&gt; are not.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creative expression of thoughts and emotions
&lt;ul&gt;
&lt;li&gt;Examples: Characters and movie scenes&lt;/li&gt;
&lt;li&gt;Protected by copyright&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Ideas, techniques, and methods
&lt;ul&gt;
&lt;li&gt;Examples: Artistic style and brushwork&lt;/li&gt;
&lt;li&gt;Not protected by copyright&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So is it acceptable to post &amp;ldquo;Ghibli-style&amp;rdquo; content on the internet?
Even if it is not legally problematic, it cannot be considered appropriate use from an AI ethics perspective.&lt;/p&gt;
&lt;h2 id=&#34;the-future-of-artificial-intelligence&#34;&gt;The Future of Artificial Intelligence&lt;/h2&gt;
&lt;p&gt;Artificial intelligence is expected to evolve beyond a simple tool into a partner that collaborates with humans and supports intelligent decisions. In future society, AI will help human capabilities and maximize efficiency in many fields, including healthcare, education, and industry. However, because technological progress is extremely rapid, if social and legal systems and norms are not developed alongside it, AI can cause various problems such as privacy violations, unfair judgments, and employment insecurity. Therefore, to maximize AI&amp;rsquo;s potential benefits and minimize side effects, ethical, legal, and social preparation is essential together with technological innovation.&lt;/p&gt;

      </description>
      
      <category>AI</category>
      
      <category>ChatGPT</category>
      
    </item>
    
    <item>
      <title>Conclusion</title>
      <link>https://www.devkuma.com/en/docs/ai/conclusion/</link>
      <pubDate>Sat, 16 Aug 2025 22:33:00 +0900</pubDate>
      <author>kc@example.com (kc kim)</author>
      <guid>https://www.devkuma.com/en/docs/ai/conclusion/</guid>
      <description>
        
        
        &lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Artificial intelligence has already become an essential technology across our society. Through this book, we hope readers can properly understand the concepts and principles of AI and use it effectively in their own fields.&lt;/p&gt;
&lt;h2 id=&#34;learning-artificial-intelligence&#34;&gt;Learning Artificial Intelligence&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://www.linkedin.com/learning/what-is-generative-ai/generative-ai-is-a-tool-in-service-of-humanity&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;LinkedIn Learning | Generative AI is a tool in service of humanity&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;A learning course on the basics of generative AI&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.linkedin.com/learning/paths/applying-generative-ai-as-a-creative-professional&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;LinkedIn Learning | Applying Generative AI as a Creative Professional&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;A generative AI course for creators&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.gseek.kr/user/popular/popularTheme/course?p_prgrm_group_sn=18&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Gyeonggi Lifelong Learning Portal | Generative AI Course&lt;i class=&#34;fas fa-external-link-alt&#34;&gt;&lt;/i&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;A generative AI course&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
      
      <category>AI</category>
      
      <category>ChatGPT</category>
      
    </item>
    
  </channel>
</rss>
