<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Text Mode Dev]]></title><description><![CDATA[Hi, I'm Alvaro. I'm a software engineer working in the space industry. In this site I publish articles about programming and software development.]]></description><link>https://textmode.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 20:19:21 GMT</lastBuildDate><atom:link href="https://textmode.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Neovim LSP Setup: nvim-lspconfig + lazy.nvim Explained]]></title><description><![CDATA[Setting up Language Server Protocol (LSP) support in Neovim used to mean juggling long configuration files, custom autocommands, and a fair amount of trial and error. While nvim-lspconfig has always provided the building blocks, managing when and how...]]></description><link>https://textmode.dev/neovim-lsp-setup-nvim-lspconfig-lazynvim-explained</link><guid isPermaLink="true">https://textmode.dev/neovim-lsp-setup-nvim-lspconfig-lazynvim-explained</guid><category><![CDATA[neovim]]></category><category><![CDATA[lsp]]></category><category><![CDATA[lazyvim]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Thu, 15 Jan 2026 13:40:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768484358552/0b36a8e2-88eb-4b37-83dc-5b0d4976981f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Setting up Language Server Protocol (LSP) support in Neovim used to mean juggling long configuration files, custom autocommands, and a fair amount of trial and error. 
While <code>nvim-lspconfig</code> has always provided the building blocks, managing when and how those pieces load could quickly become messy—especially as your configuration grows.</p>
<p>With the rise of <code>lazy.nvim</code>, Neovim plugin management has shifted toward a more declarative, performance-focused approach. Plugins load only when they’re needed, configurations become easier to reason about, and startup time stays fast even as your setup becomes more powerful. When combined with <code>nvim-lspconfig</code>, this approach results in an LSP configuration that is both clean and flexible.</p>
<p>In this post, we’ll walk through a modern Neovim LSP setup using <code>nvim-lspconfig</code> and <code>lazy.nvim</code>. You’ll see how to structure your configuration, load LSP support at the right moment, and avoid common pitfalls—ending up with a maintainable setup that scales as your editor and workflow evolve.</p>
<p>As we saw in the <a target="_blank" href="https://textmode.dev/lazynvim-installation-and-configuration">previous post</a>, we are working with the following folder structure:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768386955966/e67c5694-d6d1-41de-b922-dc4b89ea1fbe.png" alt class="image--center mx-auto" /></p>
<p>This folder structure reflects a clean and modular approach to configuring Neovim, using <strong>lazy.nvim</strong> as the plugin manager and <strong>nvim-lspconfig</strong> for LSP support. The <code>init.lua</code> file is Neovim’s main entry point and typically only contains minimal bootstrap logic, delegating most of the setup to Lua modules. The <code>lazy-lock.json</code> file is automatically generated by lazy.nvim and ensures reproducible plugin versions by locking dependencies to specific commits. Inside the <code>lua/</code> directory, configuration is split by responsibility: <code>config/lazy.lua</code> is dedicated to initializing and configuring lazy.nvim itself (including plugin loading options), while the <code>plugins/</code> directory contains individual plugin specifications. In this case, <code>plugins/lsp.lua</code> encapsulates all LSP-related configuration, such as server setup.</p>
<p>Let’s start with the configuration for the LSP client in the core config <strong>nvim/lua/config/lazy.lua</strong> file. Depending on the project size and its dependencies, the LSP server can take a noticeable amount of time to initialize. In these situations, having visual feedback during initialization is especially helpful, as it lets you know that the LSP is working and gives you an idea of how long it will take to become ready. Neovim’s built-in LSP client supports this by exposing a function that can be integrated into your statusline, allowing you to see the current LSP progress and initialization status at a glance.</p>
<pre><code class="lang-bash">----------------------------------------------------------------------
-- LSP progress
----------------------------------------------------------------------
vim.api.nvim_create_autocmd(<span class="hljs-string">"LspProgress"</span>, {
  group = vim.api.nvim_create_augroup(<span class="hljs-string">"LspProgressStatusline"</span>, { clear = <span class="hljs-literal">true</span> }),
  callback = <span class="hljs-keyword">function</span>()
    vim.cmd(<span class="hljs-string">"redrawstatus"</span>)
  end,
})

vim.opt.statusline:append(<span class="hljs-string">"%{v:lua.vim.lsp.status()}"</span>)
</code></pre>
<p>We can now test our configuration by restarting Neovim, opening the <code>lazy.lua</code> file, and checking how the LSP initialization status is reported on the statusline. Once the loading process finishes, the LSP is ready to show diagnostics, if any.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768385469919/ee3a35a2-c09d-45d9-90ba-ea5d27fc5289.gif" alt class="image--center mx-auto" /></p>
<p>One of the key capabilities of an LSP client is its support for real-time diagnostics, such as warnings, errors, and hints displayed directly in the editor. This effectively acts as a first line of defense when working with compiled languages, surfacing issues as you type rather than waiting for a full build to fail. By highlighting problems inline and providing precise messages, the LSP offers immediate feedback on mistakes, questionable patterns, or potential bugs, allowing you to correct them early and maintain a faster, more confident development flow.</p>
<pre><code class="lang-bash">----------------------------------------------------------------------
-- Diagnostics (INLINE)
----------------------------------------------------------------------
vim.diagnostic.config({
  virtual_text = {
    spacing = 4,
    prefix = <span class="hljs-string">"●"</span>, -- could be <span class="hljs-string">"■"</span>, <span class="hljs-string">"▎"</span>, <span class="hljs-string">""</span>
  },
  signs = <span class="hljs-literal">true</span>,
  underline = <span class="hljs-literal">true</span>,
  update_in_insert = <span class="hljs-literal">false</span>,
  severity_sort = <span class="hljs-literal">true</span>,
})
</code></pre>
<p>This configuration customizes how Neovim displays LSP diagnostics, focusing on clear and non-intrusive inline feedback. The <code>virtual_text</code> option enables inline diagnostic messages and fine-tunes their appearance: <code>spacing = 4</code> adds padding between the code and the message for better readability, while the <code>prefix = "●"</code> inserts a small visual marker to quickly draw attention to problematic lines without overwhelming the code. Enabling <code>signs</code> places icons in the sign column, providing an at-a-glance indication of errors or warnings, and <code>underline = true</code> visually emphasizes the exact range of code associated with the issue. Setting <code>update_in_insert = false</code> prevents diagnostics from updating while you are typing, reducing visual noise and distractions during insertion. Finally, <code>severity_sort = true</code> ensures that when multiple diagnostics exist on the same line, more severe issues (such as errors) take priority over warnings or hints, helping you focus on what matters most first.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768385667933/1eb7641c-9882-4d56-8d8a-621809a7e6f4.gif" alt class="image--center mx-auto" /></p>
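<p>Inline virtual text can truncate long messages. As an optional complement to the configuration above (the keybindings below are illustrative choices, not part of this setup), you can map keys to jump between diagnostics and open the full message in a floating window:</p>
<pre><code class="lang-lua">-- Optional: navigate diagnostics and show the full message in a float
vim.keymap.set("n", "[d", vim.diagnostic.goto_prev, { desc = "Previous diagnostic" })
vim.keymap.set("n", "]d", vim.diagnostic.goto_next, { desc = "Next diagnostic" })
vim.keymap.set("n", "&lt;leader&gt;e", vim.diagnostic.open_float, { desc = "Diagnostic details" })
</code></pre>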
<p>Now that the basic LSP setup is in place, we can move on to configuring a language server for a specific language and see the LSP client in action. We’ll do this in the <code>nvim/lua/plugins/lsp.lua</code> file, which contains the specification for the <code>nvim-lspconfig</code> plugin. Each plugin spec is made up of several configurable parts; here, we’ll focus on defining when the plugin should be loaded and setting its core configuration options. We are going to configure the Lua language server, which from then on can help you spot issues in the configuration files of other plugins and makes it easier to explore the APIs Neovim provides for implementing extra functionality.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">return</span> {
  <span class="hljs-string">"neovim/nvim-lspconfig"</span>,
  event = { <span class="hljs-string">"BufReadPre"</span>, <span class="hljs-string">"BufNewFile"</span> },
  config = <span class="hljs-keyword">function</span>()

    ----------------------------------------------------------------------
    -- Lua LSP configuration
    ----------------------------------------------------------------------
    vim.lsp.config(<span class="hljs-string">"lua_ls"</span>, {
      settings = {
        Lua = {
          runtime = { version = <span class="hljs-string">"LuaJIT"</span> },
          diagnostics = {
            globals = { <span class="hljs-string">"vim"</span> },
          },
          workspace = {
            checkThirdParty = <span class="hljs-literal">false</span>,
            library = vim.api.nvim_get_runtime_file(<span class="hljs-string">""</span>, <span class="hljs-literal">true</span>),
          },
          telemetry = { <span class="hljs-built_in">enable</span> = <span class="hljs-literal">false</span> },
        },
      },
    })

    -- Enable Lua LSP
    vim.lsp.enable(<span class="hljs-string">"lua_ls"</span>)
  end,
}
</code></pre>
<p>This plugin specification defines how <strong>nvim-lspconfig</strong> is loaded and how the Lua language server is configured in a lazy.nvim setup. The plugin is lazy-loaded on the <code>BufReadPre</code> and <code>BufNewFile</code> events, meaning it is only initialized when you open an existing file or create a new one, which helps keep Neovim’s startup time fast. Inside the <code>config</code> function, the Lua language server (<code>lua_ls</code>) is configured using <code>vim.lsp.config</code>, with settings tailored specifically for Neovim development. The runtime is set to <code>LuaJIT</code>, matching the Lua version embedded in Neovim, and the diagnostics are instructed to recognize <code>vim</code> as a global variable to avoid false warnings. The workspace configuration disables third-party checks and adds Neovim’s runtime files to the server’s library, enabling better completion and navigation for Neovim APIs. Telemetry is explicitly disabled to avoid sending usage data. Finally, <code>vim.lsp.enable("lua_ls")</code> activates the server, ensuring the configuration is applied and the Lua LSP starts automatically when editing Lua files. You can test its functionality with the default <code>K</code> keybinding, which shows hover information for the symbol under the cursor.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768386562115/b1d7efb8-a3fd-4ac5-b6db-f90e1b33a23f.gif" alt class="image--center mx-auto" /></p>
<p>To ease navigation through autocompletion options, we will define a couple of keybindings. These keybindings enhance the LSP auto-completion workflow by making <code>&lt;Tab&gt;</code> and <code>&lt;S-Tab&gt;</code> context-aware in insert mode. When the completion pop-up menu is visible (<code>pumvisible()</code>), <code>&lt;Tab&gt;</code> and <code>&lt;S-Tab&gt;</code> are repurposed to navigate forward and backward through the list of completion candidates using <code>&lt;C-n&gt;</code> and <code>&lt;C-p&gt;</code>, allowing you to quickly browse suggestions without leaving the home row. When no completion menu is shown, pressing <code>&lt;Tab&gt;</code> explicitly triggers omni-completion (<code>&lt;C-x&gt;&lt;C-o&gt;</code>), which asks the active LSP server for context-aware suggestions such as symbols, methods, or types. This dual behavior keeps completion both discoverable and fluid: you can trigger LSP completion on demand and seamlessly navigate results with familiar keys, reducing friction and making auto-completion feel like a natural extension of typing rather than a separate action.</p>
<pre><code class="lang-bash">-- Omni complete
vim.keymap.set(<span class="hljs-string">"i"</span>, <span class="hljs-string">"&lt;Tab&gt;"</span>, <span class="hljs-keyword">function</span>()
  <span class="hljs-keyword">if</span> vim.fn.pumvisible() == 1 <span class="hljs-keyword">then</span>
    <span class="hljs-built_in">return</span> <span class="hljs-string">"&lt;C-n&gt;"</span>
  end
  <span class="hljs-built_in">return</span> <span class="hljs-string">"&lt;C-x&gt;&lt;C-o&gt;"</span>
end, { expr = <span class="hljs-literal">true</span> })

vim.keymap.set(<span class="hljs-string">"i"</span>, <span class="hljs-string">"&lt;S-Tab&gt;"</span>, <span class="hljs-keyword">function</span>()
  <span class="hljs-keyword">if</span> vim.fn.pumvisible() == 1 <span class="hljs-keyword">then</span>
    <span class="hljs-built_in">return</span> <span class="hljs-string">"&lt;C-p&gt;"</span>
  end
  <span class="hljs-built_in">return</span> <span class="hljs-string">"&lt;S-Tab&gt;"</span>
end, { expr = <span class="hljs-literal">true</span> })
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768386732231/44c232d4-0488-44e7-aed9-684e14dd6c57.gif" alt class="image--center mx-auto" /></p>
<p>Wrapping things up, configuring <strong>nvim-lspconfig</strong> with <strong>lazy.nvim</strong> gives you a clean, modular, and performant way to manage LSP support in Neovim. By letting lazy.nvim handle loading and dependencies, you keep your configuration declarative and easy to reason about, while nvim-lspconfig focuses on what it does best: connecting language servers to your editor. This setup scales naturally as you add more servers, tools, or customizations, and it encourages a workflow where your editor grows with your needs instead of fighting them. With this foundation in place, you’re well-equipped to fine-tune diagnostics, keymaps, and capabilities—and make Neovim feel truly tailored to your development style.</p>
]]></content:encoded></item><item><title><![CDATA[lazy.nvim installation and configuration]]></title><description><![CDATA[For a long time, packer.nvim was the de facto plugin manager for Neovim. It offered a clean Lua-based configuration, fast startup times, and an approachable API that helped many users transition from Vimscript to a more modern Neovim setup. However, ...]]></description><link>https://textmode.dev/lazynvim-installation-and-configuration</link><guid isPermaLink="true">https://textmode.dev/lazynvim-installation-and-configuration</guid><category><![CDATA[neovim]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Thu, 08 Jan 2026 17:24:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767892611252/1f2e2562-2cb2-4bb1-a988-c54edd6a2fdd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For a long time, <strong>packer.nvim</strong> was the de facto plugin manager for Neovim. It offered a clean Lua-based configuration, fast startup times, and an approachable API that helped many users transition from Vimscript to a more modern Neovim setup. However, with <strong>packer now discontinued and no longer maintained</strong>, the Neovim ecosystem has naturally moved on in search of a replacement that embraces newer design ideas and performance improvements.</p>
<p>Enter <strong>lazy.nvim</strong>. Designed from the ground up with modern Neovim workflows in mind, <strong>lazy.nvim</strong> goes beyond simply installing plugins. It introduces <strong>automatic lazy loading by default</strong>, event-based loading, and a highly optimized startup path, which often results in faster launch times with less manual configuration. While <strong>packer</strong> required you to explicitly define when and how plugins should be loaded, <strong>lazy.nvim</strong> encourages a declarative style that scales better as your configuration grows. In this post, we’ll walk through how to install and configure <strong>lazy.nvim</strong>.</p>
<p>First things first. You can install Neovim using the following commands for Windows or macOS. For Linux distributions, please refer to the <a target="_blank" href="https://neovim.io/doc2/install/">Neovim documentation</a>, where you can find the specific command for the package manager of your distribution.</p>
<ul>
<li><p>Windows</p>
<pre><code class="lang-powershell">  winget install Neovim.Neovim
</code></pre>
</li>
</ul>
<ul>
<li><p>macOS</p>
<pre><code class="lang-bash">  brew install neovim
</code></pre>
</li>
</ul>
<p>Once the installation completes successfully, you can start Neovim from your terminal by running the following command:</p>
<pre><code class="lang-bash">nvim
</code></pre>
<p>This will launch Neovim and display its default welcome screen, confirming that the editor is installed correctly and ready to be configured. From here, we can begin customizing Neovim and setting up our development environment:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767631392680/a067b827-0394-4c57-92ea-2d71de1f7153.png" alt class="image--center mx-auto" /></p>
<p>The next step is to create the root <code>nvim</code> directory that will contain your Neovim configuration files. The exact location of this directory depends on the operating system, but on Windows and macOS, it can be found at the following paths:</p>
<ul>
<li><p>Windows:</p>
<pre><code class="lang-powershell">  mkdir ~\AppData\Local\nvim
</code></pre>
</li>
<li><p>macOS:</p>
<pre><code class="lang-bash">  mkdir ~/.config/nvim
</code></pre>
</li>
</ul>
<p>There are many ways to structure Neovim configuration files, but a clean and common approach is to use a bootstrap file called <code>init.lua</code>, along with a <code>lua/config</code> directory for core Neovim settings. In addition, a <code>lua/plugins</code> directory can be used to store one specification file per plugin you want to configure:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767892908566/8f7a41ba-050c-40fd-b85b-2f7744fa8ab0.png" alt class="image--center mx-auto" /></p>
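<p>For reference, the structure described above can be sketched as a tree (<code>lazy-lock.json</code> will be generated automatically by lazy.nvim after the first run):</p>
<pre><code class="lang-plaintext">nvim/
├── init.lua              -- bootstrap entry point
├── lazy-lock.json        -- generated by lazy.nvim
└── lua/
    ├── config/
    │   └── lazy.lua      -- lazy.nvim setup
    └── plugins/
        └── lsp.lua       -- one spec file per plugin
</code></pre>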
<p>The <code>init.lua</code> file serves as a bootstrap that triggers <strong>lazy.nvim</strong>, which in turn will load all the plugins you have configured. To set this up, create the <code>init.lua</code> file and add the following content:</p>
<pre><code class="lang-bash">-- nvim/init.lua
require(<span class="hljs-string">"config.lazy"</span>)
</code></pre>
<p>In a <strong>Neovim (Lua) configuration</strong>, <code>require</code> is how you <strong>load and use Lua modules</strong>. It’s equivalent to “importing” code so you can reuse it across files. In this case, it loads the module that’s located at <strong>nvim/lua/config/lazy.lua</strong>. Therefore, we need to create that file:</p>
<pre><code class="lang-bash">-- Bootstrap lazy.nvim
<span class="hljs-built_in">local</span> lazypath = vim.fn.stdpath(<span class="hljs-string">"data"</span>) .. <span class="hljs-string">"/lazy/lazy.nvim"</span>
<span class="hljs-keyword">if</span> not (vim.uv or vim.loop).fs_stat(lazypath) <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">local</span> lazyrepo = <span class="hljs-string">"https://github.com/folke/lazy.nvim.git"</span>
  <span class="hljs-built_in">local</span> out = vim.fn.system({ <span class="hljs-string">"git"</span>, <span class="hljs-string">"clone"</span>, <span class="hljs-string">"--filter=blob:none"</span>, <span class="hljs-string">"--branch=stable"</span>, lazyrepo, lazypath })
  <span class="hljs-keyword">if</span> vim.v.shell_error ~= 0 <span class="hljs-keyword">then</span>
    vim.api.nvim_echo({
      { <span class="hljs-string">"Failed to clone lazy.nvim:\n"</span>, <span class="hljs-string">"ErrorMsg"</span> },
      { out, <span class="hljs-string">"WarningMsg"</span> },
      { <span class="hljs-string">"\nPress any key to exit..."</span> },
    }, <span class="hljs-literal">true</span>, {})
    vim.fn.getchar()
    os.exit(1)
  end
end
vim.opt.rtp:prepend(lazypath)

-- Setup lazy.nvim
require(<span class="hljs-string">"lazy"</span>).setup({
  spec = {
    -- import your plugins
    { import = <span class="hljs-string">"plugins"</span> },
  },
  -- automatically check <span class="hljs-keyword">for</span> plugin updates
  checker = { enabled = <span class="hljs-literal">true</span> },
})
</code></pre>
<p>This file does <strong>three critical things</strong>:</p>
<ol>
<li><p><strong>Installs lazy.nvim automatically</strong> if it’s missing</p>
</li>
<li><p><strong>Adds it to Neovim’s runtime</strong></p>
</li>
<li><p><strong>Configures it to load plugins from</strong> <code>lua/plugins/</code></p>
</li>
</ol>
<p>We’ve configured the <code>plugins</code> directory as the location where Neovim should look for plugin specifications. Since no plugins are defined yet, starting Neovim at this stage would result in an error complaining about missing specs. To prevent this, we’ll create our first plugin specification. A good candidate is <code>nvim-lspconfig</code>, one of the most important plugins, as it enables support for Neovim’s built-in LSP client. To do this, we’ll add a simple placeholder for <code>nvim-lspconfig</code> in the file <code>nvim/lua/plugins/lsp.lua</code>:</p>
<pre><code class="lang-bash">-- nvim/lua/plugins/lsp.lua placeholder
<span class="hljs-built_in">return</span> {
  <span class="hljs-string">"neovim/nvim-lspconfig"</span>,
}
</code></pre>
<p>At this point, we can start Neovim again and run the <code>:Lazy</code> command. This opens the interface of <strong>lazy.nvim</strong>, where you should see the plugin listed, confirming that <strong>lazy.nvim</strong> has been set up correctly and is managing your plugins as expected. From this window, you can also inspect plugin status, trigger installations, and manage updates, giving you a clear overview of your Neovim plugin ecosystem:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767876973681/40225c1e-0df8-448a-81ae-404f5d593f7e.png" alt class="image--center mx-auto" /></p>
<p>And that’s all for the initial setup of <strong>lazy.nvim</strong>. With the plugin manager now in place, we’re ready to move on to configuring actual plugins. In the next post, we’ll take the first step in that direction by setting up <code>nvim-lspconfig</code> and enabling Neovim’s built-in Language Server Protocol support. See you in the next one!</p>
]]></content:encoded></item><item><title><![CDATA[From Zero to Productive: My Nushell Config]]></title><description><![CDATA[Modern shells like zsh and fish have pushed the command line far beyond what traditional bash offers. With features like smart autocompletion, syntax highlighting, plugins, and rich prompts, they already feel like a big upgrade over classic text-base...]]></description><link>https://textmode.dev/from-zero-to-productive-my-nushell-config</link><guid isPermaLink="true">https://textmode.dev/from-zero-to-productive-my-nushell-config</guid><category><![CDATA[nushell]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sun, 14 Dec 2025 12:44:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765716584203/182a9e86-4861-45d7-9d0e-afc6b0ca8740.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Modern shells like <strong>zsh</strong> and <strong>fish</strong> have pushed the command line far beyond what traditional <strong>bash</strong> offers. With features like smart autocompletion, syntax highlighting, plugins, and rich prompts, they already feel like a big upgrade over classic text-based workflows. For many developers, switching from bash to fish or zsh is the first step toward a more productive terminal experience.</p>
<p><strong>Nushell</strong>, however, takes a different—and more radical—approach. Instead of treating everything as plain text flowing through pipes, Nushell works with <strong>structured data</strong>. Commands output tables, records, and lists rather than raw strings, making it possible to filter, sort, and transform data in a way that feels closer to working with a programming language than a traditional shell. This model will feel familiar to anyone who has used <strong>PowerShell</strong>, which pioneered the idea of passing objects instead of text between commands.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765716799792/6d601378-2896-419a-b650-8e55ad540b90.png" alt class="image--center mx-auto" /></p>
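<p>As a quick, illustrative example of that model (built from standard Nushell commands; the available columns depend on your platform), a pipeline can filter and sort the output of <code>ls</code> as structured rows rather than text:</p>
<pre><code class="lang-bash"># Files larger than 1 MB, newest first, keeping two columns
ls | where size &gt; 1mb | sort-by modified --reverse | select name size
</code></pre>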
<p>Where Nushell stands apart is in how it applies this idea. PowerShell is deeply tied to the <strong>.NET ecosystem</strong> and primarily centered around Windows, with objects that map directly to .NET types. Nushell, on the other hand, is built from the ground up as a <strong>modern, cross-platform shell</strong>, designed to run consistently on Linux, macOS, and Windows. Its data model is shell-native rather than framework-bound, making it feel lighter, more predictable, and better suited to everyday CLI workflows.</p>
<p>In this post, I’ll walk through the configuration choices that took me <strong>from a fresh Nushell install to a genuinely productive setup</strong>: keybindings, prompts, defaults, and small tweaks that reduce friction and make Nushell feel like home—especially if you’re coming from zsh, fish, or even PowerShell.</p>
<h1 id="heading-installation">Installation</h1>
<p>Installing <strong>Nushell</strong> is straightforward on all major platforms, and the project provides first-class support for <strong>macOS, Windows, and Linux</strong>. You can choose between system package managers, prebuilt binaries, or building from source, depending on how much control you want.</p>
<hr />
<h2 id="heading-macos">macOS</h2>
<p>On macOS, the recommended approach is using <strong>Homebrew</strong>:</p>
<pre><code class="lang-bash">brew install nushell
</code></pre>
<p>This installs Nushell system-wide and keeps it up to date with regular Homebrew upgrades. Once installed, you can start it by running:</p>
<pre><code class="lang-bash">nu
</code></pre>
<p>If you manage multiple shells, you can also add Nushell to your list of available login shells later, but it works perfectly well as a regular interactive shell without replacing your default one.</p>
<hr />
<h2 id="heading-windows">Windows</h2>
<p>On Windows, Nushell integrates well with modern package managers.</p>
<p>Using <strong>winget</strong> (available by default on recent Windows versions):</p>
<pre><code class="lang-powershell">winget install Nushell.Nushell
</code></pre>
<p>Alternatively, if you use <strong>Chocolatey</strong>:</p>
<pre><code class="lang-powershell">choco install nushell
</code></pre>
<p>After installation, you can launch Nushell from <strong>Windows Terminal</strong>, which provides the best experience thanks to proper font rendering, colors, and Unicode support. Nushell runs natively on Windows and does not require WSL.</p>
<hr />
<h2 id="heading-linux">Linux</h2>
<p>On Linux, Nushell is available in most major distributions’ package repositories.</p>
<p>Examples:</p>
<ul>
<li><p><strong>Arch Linux</strong></p>
<pre><code class="lang-bash">  pacman -S nushell
</code></pre>
</li>
<li><p><strong>Ubuntu / Debian</strong></p>
<pre><code class="lang-bash">  apt install nushell
</code></pre>
</li>
<li><p><strong>Fedora</strong></p>
<pre><code class="lang-bash">  dnf install nushell
</code></pre>
</li>
</ul>
<p>If your distribution ships an older version, you can always fall back to installing a prebuilt binary or building from source.</p>
<hr />
<h2 id="heading-cross-platform-alternatives">Cross-platform alternatives</h2>
<p>If you want the same setup everywhere, Nushell also provides <strong>precompiled binaries</strong> for all platforms. You can download the appropriate archive, extract it, and place the <code>nu</code> binary somewhere in your <code>PATH</code>.</p>
<p>Another option is installing via <strong>Cargo</strong>:</p>
<pre><code class="lang-bash">cargo install nu
</code></pre>
<p>This works on all platforms supported by Rust and is useful if you already have a Rust toolchain installed.</p>
<hr />
<p>Regardless of the platform, once <code>nu</code> is available in your <code>PATH</code>, the next step is the same everywhere: start Nushell, generate the initial configuration files, and begin customizing it to fit your workflow.</p>
<h1 id="heading-configuration-files">Configuration files</h1>
<p>Nushell keeps its configuration split into <strong>two main files</strong>, each with a clear and well-defined role. Understanding how to edit them—and when to use each one—is key to building a clean and maintainable setup.</p>
<p>The first file is <code>config.nu</code>. This is where most of the user-facing behavior of Nushell is defined: keybindings, menus, prompt configuration, environment behavior, and general shell options. If something affects how Nushell <em>feels</em> while you’re using it, it probably belongs here. Changes to <code>config.nu</code> are applied the next time a Nushell session starts, making it the central place for shaping your interactive experience.</p>
<p>The second file is <code>env.nu</code>. This file is evaluated very early during shell startup and is specifically meant for <strong>environment variables</strong>. Anything that needs to exist before commands run—such as <code>PATH</code> modifications, <code>EDITOR</code>, locale settings, or tool-specific environment variables—should go here. Keeping environment-related logic in <code>env.nu</code> avoids subtle issues and makes your configuration easier to reason about.</p>
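<p>As a minimal sketch of what belongs in <code>env.nu</code> (the editor name and the extra <code>PATH</code> entry are illustrative, and this assumes a recent Nushell where <code>$env.PATH</code> is exposed as a list):</p>
<pre><code class="lang-bash"># env.nu: evaluated early, before config.nu
$env.EDITOR = "nvim"

# PATH is a list in Nushell, so it can be manipulated structurally
$env.PATH = ($env.PATH | prepend ($env.HOME | path join ".local/bin"))
</code></pre>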
<p>To edit these files, Nushell provides built-in commands that respect your configured editor. Running <code>config nu</code> opens <code>config.nu</code>, and <code>config env</code> opens <code>env.nu</code>. This approach avoids hardcoding paths and works consistently across platforms. Once you save your changes and restart Nushell (or reload the configuration), the updates take effect.</p>
<p>By separating configuration and environment concerns into these two files—and using Nushell’s own commands to edit them—you get a setup that is not only powerful, but also clean, portable, and easy to evolve over time.</p>
<h1 id="heading-initial-configuration">Initial configuration</h1>
<p>When you first start using Nushell, the <strong>initial configuration is intentionally minimal</strong>, but two small changes make an immediate difference to the day-to-day experience.</p>
<p>The first step is setting a <strong>default editor</strong>. Many Nushell commands—such as editing configuration files, opening temporary buffers, or interacting with tools that expect an <code>$EDITOR</code>—rely on this value being defined. Explicitly configuring your editor ensures that Nushell integrates smoothly with the rest of your workflow, whether you prefer a terminal editor like Helix, Neovim, or Vim, or a graphical one. It’s a simple change, but it removes friction right away.</p>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.config.buffer_editor = <span class="hljs-string">"nvim"</span>
</code></pre>
<p>If Neovim is your editor of choice and you’d like a clear, practical introduction to it, I cover that in detail in my <a target="_blank" href="https://www.amazon.com/dp/B0CCW8PGKV">book</a>.</p>
<p><a target="_blank" href="https://www.amazon.com/dp/B0CCW8PGKV"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765790582088/ca3a4ab5-125b-4383-a8ec-3c0b524d23ff.png" alt class="image--center mx-auto" /></a></p>
<p>The second step is <strong>disabling the startup banner</strong>. By default, Nushell displays a welcome message every time a new shell starts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765714432794/67f6cd31-97fe-45a5-8507-d1855629742c.png" alt class="image--center mx-auto" /></p>
<p>While helpful at first, it quickly becomes visual noise once you’re using Nushell regularly or opening many terminal sessions. Removing the banner results in a cleaner startup and a more focused terminal, especially when working inside tools like tmux or zellij.</p>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.config.show_banner = <span class="hljs-literal">false</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765714486696/3bf0e4a2-9bc8-41f5-94c5-a95b17738216.png" alt class="image--center mx-auto" /></p>
<p>These two adjustments don’t change how Nushell works, but they set the tone for the rest of the configuration: a shell that starts cleanly, respects your preferred tools, and stays out of your way unless you need it.</p>
<h1 id="heading-prompt">Prompt</h1>
<p>This configuration defines a <strong>deliberately minimal prompt</strong>, with the goal of reducing visual noise and avoiding duplicated information.</p>
<p>By default, Nushell’s prompt includes contextual details such as the current working directory and, on the right side, timing information about the last command. While useful in some scenarios, this can become redundant in a modern terminal setup.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765714571251/fcb2c263-abbb-4266-aadf-82f270dfb7df.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-minimal-left-prompt">Minimal left prompt</h3>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.PROMPT_COMMAND = <span class="hljs-string">""</span>
</code></pre>
<p><code>PROMPT_COMMAND</code> controls what Nushell renders as the main (left) prompt. Setting it to an empty string effectively removes all prompt content, leaving a clean, distraction-free command line.</p>
<p>The rationale here is that most modern terminal emulators—such as <strong>Alacritty</strong> or <strong>Windows Terminal</strong>—already display the current working directory in the <strong>window title or tab name</strong>. Repeating the same path information in the prompt adds no new value and consumes horizontal space, especially in deeply nested directories.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765714633422/105fe833-3420-4be1-b0fb-5a9c5814ade4.png" alt class="image--center mx-auto" /></p>
<p>By removing it, the focus stays entirely on the command being typed.</p>
<hr />
<h3 id="heading-disabling-the-right-prompt">Disabling the right prompt</h3>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.PROMPT_COMMAND_RIGHT = <span class="hljs-string">""</span>
</code></pre>
<p>The right prompt is commonly used to show <strong>execution time</strong>, timestamps, or status information for the last command. While this can be helpful when profiling performance or debugging slow commands, it is rarely useful during normal, day-to-day shell usage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765714663210/3c8e140e-d03a-461f-ba12-e7ea8498a79f.png" alt class="image--center mx-auto" /></p>
<p>Unless you are actively measuring how long commands take to run, this timing information becomes background noise—something your eyes learn to ignore. Disabling the right prompt removes that unnecessary cognitive load and results in a calmer, more predictable terminal layout.</p>
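<p>When you do need timing information, it is easy to ask for it explicitly instead of rendering it on every prompt. Nushell's <code>timeit</code> command measures a single invocation on demand (the command inside the braces is just a placeholder for whatever you want to profile):</p>
<pre><code class="lang-bash"># measure one command explicitly, only when you care about its duration
timeit { cargo build }
</code></pre>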
<h1 id="heading-history">History</h1>
<p>This history configuration is designed to make Nushell’s command history <strong>reliable, searchable, and safe across multiple sessions</strong>, while keeping performance predictable.</p>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.config.history = {
  file_format: sqlite
  max_size: 5_000_000
  sync_on_enter: <span class="hljs-literal">true</span>
  isolation: <span class="hljs-literal">true</span>
}
</code></pre>
<p>Each option plays a specific role:</p>
<hr />
<h3 id="heading-fileformat-sqlite"><code>file_format: sqlite</code></h3>
<p>Using <strong>SQLite</strong> instead of a plain text file gives Nushell a structured, indexed history store. This has several advantages:</p>
<ul>
<li><p>Faster and more reliable history searches, even with large histories</p>
</li>
<li><p>Reduced risk of corruption compared to appending to a text file</p>
</li>
<li><p>Better handling of concurrent access when multiple shells are open</p>
</li>
</ul>
<p>In practice, this makes history feel more like a database than a log file—something you can query efficiently rather than just scroll through.</p>
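<p>For example, since history is exposed as a structured table, you can filter it with ordinary pipelines rather than scrolling. A small sketch (assuming the default <code>command</code> column name):</p>
<pre><code class="lang-bash"># last five history entries that mention git
history | where command =~ "git" | last 5
</code></pre>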
<hr />
<h3 id="heading-maxsize-5000000"><code>max_size: 5_000_000</code></h3>
<p>This sets a <strong>hard limit</strong> on the number of entries kept in the history. Capping the size prevents unbounded growth over time, which can otherwise lead to slower searches and unnecessary disk usage.</p>
<p>A limit of five million entries is large enough to retain a long and useful command history, while still keeping performance snappy and storage under control. The oldest entries are automatically discarded when the limit is reached, so no manual cleanup is needed.</p>
<hr />
<h3 id="heading-synconenter-true"><code>sync_on_enter: true</code></h3>
<p>With this enabled, each command is <strong>written to history as soon as you press Enter</strong>, not when the shell exits. This is particularly useful if:</p>
<ul>
<li><p>You run multiple Nushell instances in parallel</p>
</li>
<li><p>A shell crashes or is force-closed</p>
</li>
<li><p>You frequently open and close terminals</p>
</li>
</ul>
<p>You don’t lose recent commands, and other running shells can immediately see the updated history.</p>
<hr />
<h3 id="heading-isolation-true"><code>isolation: true</code></h3>
<p>History isolation ensures that <strong>each shell session maintains its own logical history context</strong>. Commands executed in one session do not instantly bleed into another, which avoids confusing or irrelevant suggestions when navigating history.</p>
<p>This is especially valuable when working on multiple projects at once or when using tools like tmux or zellij with many panes open. You get history that reflects <em>what you were doing in that session</em>, not noise from elsewhere.</p>
<h1 id="heading-vi-mode">Vi mode</h1>
<p>Nushell supports both <strong>emacs</strong> and <strong>vi</strong> editing modes for the command line, but enabling vi mode can be significantly more ergonomic for long-term, keyboard-driven workflows.</p>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.config.edit_mode = <span class="hljs-string">"vi"</span>
<span class="hljs-variable">$env</span>.config.cursor_shape = {
  vi_insert: line
  vi_normal: block
}
<span class="hljs-variable">$env</span>.PROMPT_INDICATOR_VI_INSERT = <span class="hljs-string">"&gt; "</span>
<span class="hljs-variable">$env</span>.PROMPT_INDICATOR_VI_NORMAL = <span class="hljs-string">"$ "</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765714993372/ca19e56b-38d6-45af-a1e4-38e39b8897d4.gif" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-why-vi-mode-is-more-ergonomic">Why vi mode is more ergonomic</h3>
<p>The key ergonomic advantage of vi mode is that it allows you to <strong>navigate and edit text without moving your hands away from the home row</strong>—the default resting position of your fingers on the keyboard (ASDF / JKL;).</p>
<p>In emacs mode, navigating across words or jumping around a command often involves:</p>
<ul>
<li><p>Modifier-heavy key combinations (<code>Ctrl + …</code>)</p>
</li>
<li><p>Or reaching for the arrow keys</p>
</li>
</ul>
<p>Both actions break hand positioning and slow down editing. Vi mode, by contrast, uses single-key motions (<code>w</code>, <code>b</code>, <code>e</code>, <code>0</code>, <code>$</code>, etc.) that keep your hands anchored on the home row, making movement faster and less fatiguing over time.</p>
<hr />
<h3 id="heading-clear-separation-between-modes">Clear separation between modes</h3>
<p>Vi mode introduces a strict distinction between <strong>insert mode</strong> (typing text) and <strong>normal mode</strong> (navigating and editing). While this requires a small mental adjustment at first, it enables far more precise and efficient command-line editing once the muscle memory is built.</p>
<p>To make this distinction obvious, the configuration uses <strong>visual cues</strong>.</p>
<hr />
<h3 id="heading-cursor-shape-as-a-mode-indicator">Cursor shape as a mode indicator</h3>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.config.cursor_shape = {
  vi_insert: line
  vi_normal: block
}
</code></pre>
<p>Changing the cursor shape provides immediate, subconscious feedback about the current mode:</p>
<ul>
<li><p><strong>Line cursor</strong> → insert mode (typing text)</p>
</li>
<li><p><strong>Block cursor</strong> → normal mode (navigation and editing)</p>
</li>
</ul>
<p>This mirrors the behavior of modal editors like Vim and Helix and helps prevent accidental typing in the wrong mode.</p>
<hr />
<h3 id="heading-prompt-indicators-for-extra-clarity">Prompt indicators for extra clarity</h3>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.PROMPT_INDICATOR_VI_INSERT = <span class="hljs-string">"&gt; "</span>
<span class="hljs-variable">$env</span>.PROMPT_INDICATOR_VI_NORMAL = <span class="hljs-string">"$ "</span>
</code></pre>
<p>The prompt itself also changes depending on the active mode. This reinforces the cursor shape signal and makes the current state unmistakable, even in terminals where cursor shape changes might not be obvious or supported.</p>
<h1 id="heading-keybindings">Keybindings</h1>
<h3 id="heading-history-hint-complete-with-ctrl-space"><strong>History Hint Complete with</strong> <code>Ctrl + Space</code></h3>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.config.keybindings ++= [
  {
     name: history_hint_complete
     modifier: control
     keycode: space
     mode: vi_insert
     event: [
       { send: HistoryHintComplete }
    ]
  }
 ]
</code></pre>
<p>Nushell’s <strong>history suggestions</strong> are one of its most powerful features, and this keybinding improves the overall efficiency of using that feature:</p>
<ul>
<li><p><strong>Ergonomically better than default keybindings</strong>: By binding the <strong>history completion</strong> feature to <code>Ctrl + Space</code>, you avoid the need to <strong>reach for the arrow keys</strong>, which interrupts your typing flow and slows down interaction. Instead, you can use <code>Ctrl + Space</code> as a much more comfortable alternative, keeping your hands anchored on the home row.</p>
<p>  In vi mode, you typically don’t need to move your hands away from the home row for basic operations like navigating the command history, and this keybinding is consistent with that design. Arrow keys require awkward finger stretches, especially when you’re typing fast or working in a split terminal.</p>
</li>
<li><p><strong>Optimized for frequent use</strong>: Since <strong>history completion</strong> in Nushell is a feature you will use <strong>often</strong>, being able to trigger it with a single keystroke (<code>Ctrl + Space</code>) is a huge time-saver. It’s faster and feels more natural than reaching for the up or down arrow keys, particularly when navigating through long histories.</p>
<p>  This feature is particularly important in Nushell because its <strong>suggestions</strong> aren’t just about previous commands, but also about <strong>contextual hints</strong>—making it more likely you will use the history completion multiple times during your workflow.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765715321276/be25ca2f-0d45-4d13-ae3b-3886e50ecc1b.gif" alt class="image--center mx-auto" /></p>
<h1 id="heading-system-trash">System Trash</h1>
<p>Nushell provides a convenient way to handle deleted files through a <strong>system trash configuration</strong>, which can make file deletion <strong>safer and more forgiving</strong>.</p>
<pre><code class="lang-bash"><span class="hljs-variable">$env</span>.config.rm.always_trash = <span class="hljs-literal">true</span>
</code></pre>
<hr />
<h3 id="heading-how-it-works">How it works</h3>
<p>By default, many shells (including basic <code>rm</code> in bash) <strong>permanently delete files</strong>, which can be risky—especially if you accidentally remove something important. With this configuration:</p>
<ul>
<li><p><code>always_trash = true</code> tells Nushell that whenever you use the <code>rm</code> command, files should be sent to the <strong>system trash/recycle bin</strong> instead of being deleted immediately.</p>
</li>
<li><p>The exact behavior depends on the operating system:</p>
<ul>
<li><p><strong>macOS</strong> → Files go to the Trash.</p>
</li>
<li><p><strong>Windows</strong> → Files go to the Recycle Bin.</p>
</li>
<li><p><strong>Linux</strong> → Files go to the user’s trash directory (e.g., <code>~/.local/share/Trash</code>), following the <a target="_blank" href="http://FreeDesktop.org">FreeDesktop.org</a> specification.</p>
</li>
</ul>
</li>
</ul>
<p>This means you can safely undo deletions, restore files, or review them before permanently removing them.</p>
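<p>When you genuinely want to bypass the trash, <code>rm</code> still lets you do so with an explicit flag, so permanent deletion becomes a deliberate choice rather than the default (<code>notes.txt</code> is just an example file):</p>
<pre><code class="lang-bash">rm notes.txt              # sent to the system trash (always_trash = true)
rm --permanent notes.txt  # deleted immediately, bypassing the trash
</code></pre>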
<h1 id="heading-true-color">True color</h1>
<p>Enabling <strong>true color</strong> in Nushell is a small but important configuration, especially when working <strong>remotely</strong> on a server using a <strong>text-mode editor</strong> like <strong>Helix</strong> or <strong>Neovim</strong>.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># enable true color</span>
<span class="hljs-variable">$env</span>.COLORTERM = <span class="hljs-string">"truecolor"</span>
</code></pre>
<hr />
<h3 id="heading-what-it-does">What it does</h3>
<ul>
<li><p><code>COLORTERM</code> is an environment variable that tells applications the terminal supports <strong>24-bit color</strong>, also known as <strong>true color</strong>.</p>
</li>
<li><p>Setting it to <code>"truecolor"</code> ensures that both the shell and terminal-based applications can use the <strong>full RGB color range</strong>, rather than being limited to 16 or 256 colors.</p>
</li>
</ul>
<hr />
<h3 id="heading-why-its-important-for-remote-editing">Why it’s important for remote editing</h3>
<ol>
<li><p><strong>Consistent color rendering</strong><br /> When you SSH into a server, the terminal may not automatically detect true color support. Without this variable, editors like Helix or Neovim might fall back to limited color schemes, making syntax highlighting less informative or harder to read.</p>
</li>
<li><p><strong>Better syntax highlighting</strong><br /> True color allows modern editors to use rich themes accurately, so things like code highlighting, git diff colors, or syntax error markers appear <strong>exactly as intended</strong>.</p>
</li>
<li><p><strong>Improved terminal experience</strong><br /> Even outside the editor, Nushell itself can render colored outputs (like tables, errors, or commands) with full fidelity, which is particularly useful for scripts, logs, or pipelines with multiple colors.</p>
</li>
<li><p><strong>Cross-terminal consistency</strong><br /> By explicitly setting <code>COLORTERM=truecolor</code>, you ensure that the color behavior is <strong>consistent</strong> whether you’re on <strong>Alacritty, Windows Terminal, iTerm2, or a remote SSH session</strong>. This avoids surprises when moving between local and remote environments.</p>
</li>
</ol>
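<p>A quick way to confirm the setting took effect, locally or inside an SSH session, is simply to print the variable:</p>
<pre><code class="lang-bash">$env.COLORTERM
# should print: truecolor
</code></pre>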
<h1 id="heading-summary">Summary</h1>
<p>With these configurations in place, Nushell transforms from a minimal, out-of-the-box shell into a <strong>powerful, ergonomic, and modern command-line environment</strong>. We’ve set up a minimal, distraction-free prompt, enabled vi-mode with clear visual cues, optimized keybindings for efficiency, configured a reliable and fast history system, ensured safe file deletion with system trash, and enabled true color for a rich, consistent visual experience—even on remote servers. Each tweak focuses on reducing friction, improving productivity, and letting the shell work <strong>for you</strong> rather than demanding constant attention. The result is a Nushell setup that is not only <strong>functional and safe</strong>, but also <strong>comfortable and enjoyable</strong> to use day after day, whether you’re navigating files, writing scripts, or managing multiple projects across platforms.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># minimal prompt</span>
<span class="hljs-variable">$env</span>.PROMPT_COMMAND = <span class="hljs-string">""</span>

<span class="hljs-comment"># disable right prompt</span>
<span class="hljs-variable">$env</span>.PROMPT_COMMAND_RIGHT = <span class="hljs-string">""</span>

<span class="hljs-comment"># setup default editor</span>
<span class="hljs-variable">$env</span>.config.buffer_editor = <span class="hljs-string">"nvim"</span>

<span class="hljs-comment"># disable banner</span>
<span class="hljs-variable">$env</span>.config.show_banner = <span class="hljs-literal">false</span>

<span class="hljs-comment"># enable system trash</span>
<span class="hljs-variable">$env</span>.config.rm.always_trash = <span class="hljs-literal">true</span>

<span class="hljs-comment"># setup history</span>
<span class="hljs-variable">$env</span>.config.history = {
  file_format: sqlite
  max_size: 5_000_000
  sync_on_enter: <span class="hljs-literal">true</span>
  isolation: <span class="hljs-literal">true</span>
}

<span class="hljs-comment"># enable vi mode</span>
<span class="hljs-variable">$env</span>.config.edit_mode = <span class="hljs-string">"vi"</span>
<span class="hljs-variable">$env</span>.config.cursor_shape = {
  vi_insert: line
  vi_normal: block
}
<span class="hljs-variable">$env</span>.PROMPT_INDICATOR_VI_INSERT = <span class="hljs-string">"&gt; "</span>
<span class="hljs-variable">$env</span>.PROMPT_INDICATOR_VI_NORMAL = <span class="hljs-string">"$ "</span>

<span class="hljs-comment"># key bindings</span>
<span class="hljs-variable">$env</span>.config.keybindings ++= [
  {
     name: history_hint_complete
     modifier: control
     keycode: space
     mode: vi_insert
     event: [
       { send: HistoryHintComplete }
    ]
  }
 ]

<span class="hljs-comment"># enable true color</span>
<span class="hljs-variable">$env</span>.COLORTERM = <span class="hljs-string">"truecolor"</span>
</code></pre>
<p>Of course, this is only a starting point. Nushell is a <strong>highly configurable shell</strong>: from prompts and keybindings to history, editor integration, and system behavior, almost every aspect of its behavior, appearance, and interaction can be tailored to your workflow.</p>
<p>If you want to explore all the available configuration options, Nushell provides a <strong>built-in documentation command</strong>:</p>
<pre><code class="lang-bash">config nu --doc | nu-highlight
</code></pre>
<p>Thank you for taking the time to read this post all the way through! I hope you found the tips and configurations useful for making Nushell a more productive and enjoyable shell. Be sure to <strong>come back for future posts</strong>, where I’ll share more insights, tricks, and practical setups to help you get even more out of your command-line workflow.</p>
]]></content:encoded></item><item><title><![CDATA[The Best AI Plugin for Neovim]]></title><description><![CDATA[As a Neovim user, you probably know the value of a great code completion plugin. AI-powered code assistants can significantly boost your productivity by offering context-aware suggestions, autocompletion, and even refactoring recommendations. I recen...]]></description><link>https://textmode.dev/the-best-ai-plugin-for-neovim</link><guid isPermaLink="true">https://textmode.dev/the-best-ai-plugin-for-neovim</guid><category><![CDATA[neovim]]></category><category><![CDATA[AI]]></category><category><![CDATA[plugins]]></category><category><![CDATA[codeium]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Wed, 04 Sep 2024 16:13:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725465207255/933ccaea-4139-44c8-8ca3-1590855ff4f0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a Neovim user, you probably know the value of a great code completion plugin. AI-powered code assistants can significantly boost your productivity by offering context-aware suggestions, autocompletion, and even refactoring recommendations. I recently explored several AI plugins for Neovim, and while there are quite a few interesting options, I found one that clearly stood out: <strong>Codeium</strong>.</p>
<p>Let me walk you through my evaluation process and explain why Codeium is the best AI plugin for Neovim compared to other popular options like the ChatGPT plugin and GitHub Copilot.</p>
<h3 id="heading-evaluating-ai-plugins-for-neovim">Evaluating AI Plugins for Neovim</h3>
<p>When evaluating AI plugins for Neovim, I focused on several key factors:</p>
<ul>
<li><p><strong>Cost</strong>: Is the plugin free, or does it require a subscription?</p>
</li>
<li><p><strong>Ease of use</strong>: How smoothly does the plugin integrate with Neovim's workflow?</p>
</li>
<li><p><strong>Flexibility</strong>: Can you iterate through different suggestions?</p>
</li>
<li><p><strong>Compatibility</strong>: Does the plugin play nicely with other Neovim features, like the built-in Language Server Protocol (LSP) autocompletion?</p>
</li>
</ul>
<h3 id="heading-the-contenders-chatgpt-github-copilot-and-codeium">The Contenders: ChatGPT, GitHub Copilot, and Codeium</h3>
<h4 id="heading-1-chatgpt-plugin">1. ChatGPT Plugin</h4>
<p>The ChatGPT plugin is an interesting option that integrates OpenAI's powerful language model directly into Neovim. However, there is a significant downside: <strong>it requires an OpenAI API key</strong>, which isn't free. The cost can quickly add up, especially if you're a heavy user. Additionally, while the ChatGPT plugin offers impressive suggestions, it lacks some of the seamless integration that other plugins provide, especially when it comes to iterating through multiple suggestions in real-time.</p>
<h4 id="heading-2-github-copilot">2. GitHub Copilot</h4>
<p>GitHub Copilot is another popular choice that uses OpenAI's Codex model to provide code suggestions directly in your editor. However, Copilot also comes with a cost. <strong>It requires a subscription</strong>, and while it offers powerful autocompletion, there are a few drawbacks:</p>
<ul>
<li><p><strong>Lack of Iteration Control</strong>: Copilot's completion system does not provide an easy way to iterate over different suggestions. You get one suggestion at a time, and if you don't like it, you must hit undo or try triggering it again.</p>
</li>
<li><p><strong>Proprietary Nature</strong>: Copilot's integration is somewhat closed, and you have less flexibility in terms of configuration and customization.</p>
</li>
</ul>
<p>Overall, while Copilot is a strong tool, its subscription requirement and limited control over suggestions make it less appealing for Neovim users who want a free, flexible solution.</p>
<h3 id="heading-the-winner-codeium">The Winner: Codeium</h3>
<p>After testing the options, I found that <strong>Codeium</strong> was the clear winner among AI plugins for Neovim. Here's why:</p>
<h4 id="heading-1-its-free">1. <strong>It's Free!</strong></h4>
<p>First and foremost, Codeium is <strong>completely free</strong>. You don't need to buy an API key or pay for a subscription to use it. This is a huge advantage over both the ChatGPT plugin and GitHub Copilot, making it a much more accessible option for everyone, from hobbyists to professional developers.</p>
<h4 id="heading-2-flexible-suggestions">2. <strong>Flexible Suggestions</strong></h4>
<p>Codeium allows you to <strong>iterate over multiple suggestions</strong> seamlessly. Unlike GitHub Copilot, which only shows one suggestion at a time, Codeium provides a list of suggestions that you can quickly navigate through. This makes it much easier to find the best suggestion for your code without disrupting your flow.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725451565288/b8c3cab1-9740-453d-a5b8-5b3366ca2637.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-3-integration-with-neovims-ghost-text">3. <strong>Integration with Neovim's Ghost Text</strong></h4>
<p>Codeium makes use of Neovim's <strong>ghost text</strong> feature, which allows code suggestions to appear in a subtle, non-intrusive manner. This is a big deal because it means that Codeium's suggestions do not collide with the LSP (Language Server Protocol) autocompletion menu. Instead, they work harmoniously together, allowing you to enjoy both Codeium's AI-powered suggestions and Neovim's native LSP completions without any conflicts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725451482428/912e2650-baf9-4dff-8b09-219f190a6f3b.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-4-easy-to-install-and-configure">4. <strong>Easy to Install and Configure</strong></h4>
<p>Codeium is also straightforward to install and configure in Neovim. Whether you're using <code>packer.nvim</code>, <code>vim-plug</code>, or another plugin manager, the setup process is simple, and the configuration options are flexible enough to suit different workflows.</p>
<h3 id="heading-how-to-get-started-with-codeium-in-neovim">How to Get Started with Codeium in Neovim</h3>
<p>If you're ready to try Codeium, here's a quick guide to get you started:</p>
<ol>
<li><p><strong>Install Codeium</strong>: Add the following line to your plugin manager configuration. For example, if you use <code>lazy.nvim</code>, create a plugin file for Codeium (e.g.: <code>lua/plugins/codeium.lua</code>):</p>
<pre><code class="lang-plaintext"> local Plugin = {'Exafunction/codeium.vim'}
 -- Call :Codeium Auth after installation to get Token ID
</code></pre>
</li>
<li><p><strong>Enable the plugin on demand</strong>: I follow a minimalistic approach when it comes to Neovim plugins, as you can see in my <a target="_blank" href="https://t.co/CSEJEmUrU2">book about Neovim</a>, to avoid cluttering Neovim with too many plugins.</p>
<p> <a target="_blank" href="https://www.amazon.com/dp/B0CCW8PGKV"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765790582088/ca3a4ab5-125b-4383-a8ec-3c0b524d23ff.png" alt class="image--center mx-auto" /></a></p>
<p> That's why I've defined a key mapping to load the Codeium plugin only when you toggle it:</p>
<pre><code class="lang-plaintext"> Plugin.cmd = {'CodeiumToggle'}

 -- Toggle Codeium
 vim.keymap.set('n', '&lt;leader&gt;&lt;CR&gt;', ':CodeiumToggle&lt;CR&gt;')

 function Plugin.config()
   -- Disabled by default
   vim.g.codeium_enabled = false
 end
</code></pre>
</li>
<li><p><strong>Setup Codeium Chat</strong>: Codeium also provides a chat prompt through its website that you can use, much like ChatGPT, for more elaborate questions, such as creating a unit test for a function.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725452788002/dcb4c388-6ae0-45ec-acc7-65666963aa46.png" alt class="image--center mx-auto" /></p>
<p> As Codeium Chat is exposed by a function in this plugin, I prefer to create a user command to call it, so that it is easier to start the chat:</p>
<pre><code class="lang-plaintext"> function Plugin.config()
   -- 

   -- Codeium Chat
   vim.api.nvim_create_user_command('CodeiumChat', function(opts)
     vim.api.nvim_call_function("codeium#Chat", {})
   end,
   {})
 end
</code></pre>
</li>
<li><p><strong>Key mappings</strong>: the main actions you need when interacting with Codeium are cycling through suggestions, accepting a suggestion, and clearing the suggestions. I use a few key mappings that give you comfortable access to these functions while you are typing, but you can choose whatever key mappings suit you better:</p>
<pre><code class="lang-plaintext">   -- Key bindings
   vim.g.codeium_no_map_tab = true
   vim.keymap.set('i', '&lt;C-l&gt;', function () return vim.fn['codeium#Accept']() end, { expr = true, silent = true })
   vim.keymap.set('i', '&lt;C-j&gt;', function() return vim.fn['codeium#CycleCompletions'](1) end, { expr = true, silent = true })
   vim.keymap.set('i', '&lt;C-k&gt;', function() return vim.fn['codeium#CycleCompletions'](-1) end, { expr = true, silent = true })
   vim.keymap.set('i', '&lt;C-d&gt;', function() return vim.fn['codeium#Clear']() end, { expr = true, silent = true })
</code></pre>
</li>
<li><p><strong>Status line</strong>: One of the things I like most about the Codeium plugin is its ability to report information on the status line. That way you can see whether the plugin is activated, the total number of suggestions provided by Codeium, and which suggestion is currently being displayed.</p>
<pre><code class="lang-plaintext"> -- Function to wrap the codeium#GetStatusString Vimscript function
 function get_codeium_status()
   return vim.fn['codeium#GetStatusString']()
 end

 function Plugin.config()
   -- 
   -- Add Codeium status to the statusline
   vim.o.statusline = table.concat({
     "%f",                       -- Full file path
     " %h",                      -- Help flag
     " %m",                      -- Modified flag
     " %r",                      -- Readonly flag
     "%=",                       -- Right aligned
     " %y ",                     -- File type
     "%{&amp;ff} ",                  -- File format
     " %p%%",                    -- File position percentage
     " %l:%c ",                  -- Line and column number
     " [Codeium:%{v:lua.get_codeium_status()}]", -- Codeium status
   })
 end
</code></pre>
</li>
<li><p><strong>Complete configuration</strong>: putting all these pieces together, we end up with the following configuration for the Codeium plugin:</p>
<pre><code class="lang-plaintext"> local Plugin = {'Exafunction/codeium.vim'}
 -- Call :Codeium Auth after installation to get Token ID

 Plugin.cmd = {'CodeiumToggle'}

 -- Toggle Codeium
 vim.keymap.set('n', '&lt;leader&gt;&lt;CR&gt;', ':CodeiumToggle&lt;CR&gt;')

 -- Function to wrap the codeium#GetStatusString Vimscript function
 function get_codeium_status()
   return vim.fn['codeium#GetStatusString']()
 end

 function Plugin.config()
   -- Disabled by default
   vim.g.codeium_enabled = false

   -- Codeium Chat
   vim.api.nvim_create_user_command('CodeiumChat', function(opts)
     vim.api.nvim_call_function("codeium#Chat", {})
   end,
   {})

   -- Key bindings
   vim.g.codeium_no_map_tab = true
   vim.keymap.set('i', '&lt;C-l&gt;', function () return vim.fn['codeium#Accept']() end, { expr = true, silent = true })
   vim.keymap.set('i', '&lt;C-j&gt;', function() return vim.fn['codeium#CycleCompletions'](1) end, { expr = true, silent = true })
   vim.keymap.set('i', '&lt;C-k&gt;', function() return vim.fn['codeium#CycleCompletions'](-1) end, { expr = true, silent = true })
   vim.keymap.set('i', '&lt;C-d&gt;', function() return vim.fn['codeium#Clear']() end, { expr = true, silent = true })

   -- Add Codeium status to the statusline
   vim.o.statusline = table.concat({
     "%f",                       -- Full file path
     " %h",                      -- Help flag
     " %m",                      -- Modified flag
     " %r",                      -- Readonly flag
     "%=",                       -- Right aligned
     " %y ",                     -- File type
     "%{&amp;ff} ",                  -- File format
     " %p%%",                    -- File position percentage
     " %l:%c ",                  -- Line and column number
     " [Codeium:%{v:lua.get_codeium_status()}]", -- Codeium status
   })
 end

 return Plugin
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725466190695/aa914506-522a-446b-8a48-f80d851051c3.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>While there are several AI plugins available for Neovim, <strong>Codeium</strong> stands out as the best option. It's free, flexible, and integrates perfectly with Neovim's existing autocompletion features. Unlike other alternatives, such as the ChatGPT plugin or GitHub Copilot, Codeium doesn't require a subscription or API key, and it gives you full control over the suggestions it provides.</p>
<p>So, if you're looking for an AI-powered code assistant for Neovim, give Codeium a try—you won't be disappointed!</p>
<p>Happy coding!</p>
]]></content:encoded></item><item><title><![CDATA[C async/await - Part 4]]></title><description><![CDATA[Introduction
Whoa! It's been a long road to get here. But that's evidence of the complexity that lies behind this technique. In this post we are going to glue all the pieces that we have been developing through the previous posts to get ...]]></description><link>https://textmode.dev/c-async-await-part-4</link><guid isPermaLink="true">https://textmode.dev/c-async-await-part-4</guid><category><![CDATA[async/await]]></category><category><![CDATA[asynchronous]]></category><category><![CDATA[C]]></category><category><![CDATA[Event Loop]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sat, 02 Mar 2024 10:04:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708946792115/f12020eb-9437-4f78-8565-0bb0fd1a13d2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Whoa! It's been a long road to get here. But that's evidence of the complexity that lies behind this technique. In this post we are going to glue together all the pieces we have been developing throughout the previous posts to get a basic implementation of async/await. Although it is just a toy experiment meant to show how this technique can be implemented, it provides enough functionality to analyze different scenarios and check how it behaves. It even allows you to chain async calls, so that you can test nested asynchronous functions.</p>
<p>In this example we will use terms that go by other names in other programming languages. For example, the term <strong>Coroutine</strong> that we will be using equates to <strong>Future</strong> (used in Dart) or <strong>Promise</strong> (used in JavaScript). And any function receiving a Coroutine parameter would be the equivalent of a function marked as <strong>async</strong>.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">loadJson</span>(<span class="hljs-params">url</span>) </span>{
  <span class="hljs-keyword">let</span> response = <span class="hljs-keyword">await</span> fetch(url);
  <span class="hljs-keyword">if</span> (response.status == <span class="hljs-number">200</span>) {
    <span class="hljs-keyword">return</span> response.json();
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> HttpError(response);
  }
}
</code></pre>
<p>The main difference with respect to the previous <a target="_blank" href="https://terminalprogrammer.com/zig-for-c-programmers-asyncawait-part-3">post</a> is that we will store coroutines in the Thread Pool queue (the work queue in the previous post) and the Event Loop queue (the done queue in the previous post). As coroutines are resumable, we no longer need to carry a pair of callbacks as we did before.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708946624060/6bb59a1b-0e5d-4747-bd74-dd68254cfc9c.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-c"><span class="hljs-meta">#<span class="hljs-meta-keyword">ifndef</span> __LOOP__</span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">define</span> __LOOP__</span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">"coroutine.h"</span></span>

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">EventLoop</span> <span class="hljs-title">EventLoop</span>;</span>

<span class="hljs-function">EventLoop* <span class="hljs-title">new_EventLoop</span><span class="hljs-params">()</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_EventLoop</span><span class="hljs-params">(EventLoop* l)</span></span>;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_submit</span><span class="hljs-params">(EventLoop* l, CoroutineFn fn)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_run</span><span class="hljs-params">(EventLoop* l)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_stop</span><span class="hljs-params">(EventLoop* l)</span></span>;

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Task</span> {</span>
        CoroutineFn fn;
        Coroutine*  parent;
        <span class="hljs-keyword">void</span>*       data;
} Task;
<span class="hljs-function">Coroutine* <span class="hljs-title">EventLoop_async</span><span class="hljs-params">(EventLoop* l, Task task)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">EventLoop_await</span><span class="hljs-params">(EventLoop* l, Coroutine* co)</span></span>;

<span class="hljs-meta">#<span class="hljs-meta-keyword">endif</span></span>
</code></pre>
<p>Once a coroutine is done, we can resume the execution of its parent coroutine. This leads us to the next big difference, this time with respect to the first <a target="_blank" href="https://terminalprogrammer.com/zig-for-c-programmers-asyncawait-part-1">post</a>: we are going to extend the definition of coroutines to keep track of their parent coroutines. This will allow us to nest coroutine calls.</p>
<pre><code class="lang-c"><span class="hljs-meta">#<span class="hljs-meta-keyword">ifndef</span> __COROUTINE__</span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">define</span> __COROUTINE__</span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdbool.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;ucontext.h&gt;</span></span>

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Coroutine</span> <span class="hljs-title">Coroutine</span>;</span>
<span class="hljs-keyword">typedef</span> <span class="hljs-keyword">void</span>* (*CoroutineFn)(Coroutine*);

<span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Coroutine</span> {</span>
    Coroutine*  parent;      <span class="hljs-comment">// New field</span>
    CoroutineFn fn;
    <span class="hljs-keyword">ucontext_t</span>  caller_ctx;
    <span class="hljs-keyword">ucontext_t</span>  callee_ctx;
    <span class="hljs-keyword">void</span>*       yield_value;
    <span class="hljs-keyword">bool</span>        finished;
};

<span class="hljs-function">Coroutine* <span class="hljs-title">new_Coroutine</span><span class="hljs-params">(CoroutineFn fn, Coroutine* parent)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">Coroutine_resume</span><span class="hljs-params">(Coroutine* c)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">Coroutine_suspend</span><span class="hljs-params">(Coroutine* c, <span class="hljs-keyword">void</span>* value)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_Coroutine</span><span class="hljs-params">(Coroutine* c)</span></span>;

<span class="hljs-meta">#<span class="hljs-meta-keyword">endif</span></span>
</code></pre>
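<p>The header above only declares the interface; the implementation comes from the first post of the series. For readers landing here directly, the following is a minimal <code>ucontext</code>-based sketch that is consistent with this interface, not the exact code from that post: the trampoline, the inline stack field, the stack size and the demo function are choices made here, and the pointer-splitting trick assumes a 64-bit platform.</p>

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

typedef struct Coroutine Coroutine;
typedef void* (*CoroutineFn)(Coroutine*);

struct Coroutine {
        Coroutine*  parent;
        CoroutineFn fn;
        ucontext_t  caller_ctx;
        ucontext_t  callee_ctx;
        void*       yield_value;
        bool        finished;
        char        stack[STACK_SIZE];  /* inline stack, added for this sketch */
};

/* makecontext only forwards int-sized arguments, so the Coroutine pointer
 * is split into two halves (this assumes 64-bit pointers). */
static void trampoline(unsigned hi, unsigned lo) {
        Coroutine* c = (Coroutine*)(((uintptr_t)hi << 32) | (uintptr_t)lo);
        c->yield_value = c->fn(c);
        c->finished = true;
        /* returning here follows uc_link back to caller_ctx */
}

Coroutine* new_Coroutine(CoroutineFn fn, Coroutine* parent) {
        Coroutine* c = calloc(1, sizeof(Coroutine));
        c->fn = fn;
        c->parent = parent;
        getcontext(&c->callee_ctx);
        c->callee_ctx.uc_stack.ss_sp   = c->stack;
        c->callee_ctx.uc_stack.ss_size = STACK_SIZE;
        c->callee_ctx.uc_link          = &c->caller_ctx;
        uintptr_t p = (uintptr_t)c;
        makecontext(&c->callee_ctx, (void (*)(void))trampoline, 2,
                    (unsigned)(p >> 32), (unsigned)(p & 0xFFFFFFFFu));
        return c;
}

void* Coroutine_resume(Coroutine* c) {
        if (!c->finished)
                swapcontext(&c->caller_ctx, &c->callee_ctx);
        return c->yield_value;
}

void Coroutine_suspend(Coroutine* c, void* value) {
        c->yield_value = value;
        swapcontext(&c->callee_ctx, &c->caller_ctx);
}

void delete_Coroutine(Coroutine* c) {
        free(c);
}

/* Demo: a coroutine that yields 1 and then returns 2 */
static void* counter_fn(Coroutine* co) {
        Coroutine_suspend(co, (void*)(intptr_t)1);
        return (void*)(intptr_t)2;
}

bool coroutine_demo(void) {
        Coroutine* c = new_Coroutine(counter_fn, NULL);
        intptr_t first  = (intptr_t)Coroutine_resume(c);  /* runs until the suspend */
        intptr_t second = (intptr_t)Coroutine_resume(c);  /* runs to completion */
        bool ok = (first == 1) && (second == 2) && c->finished;
        delete_Coroutine(c);
        return ok;
}
```

<p>Resuming the demo coroutine twice first yields the suspended value and then the return value, which is exactly the behavior that the Event Loop and the worker threads rely on below.</p>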
<p>When we want to submit a new task to the Event Loop, we create a coroutine without a parent, and push it into the Event Loop queue.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_submit</span><span class="hljs-params">(EventLoop* l, CoroutineFn fn)</span> </span>{
        Coroutine* co = new_Coroutine(fn, <span class="hljs-literal">NULL</span>);
        EventLoop_resume(l, co);
}
</code></pre>
<p>That coroutine will be picked up by the Event Loop later on. The loop resumes it and then checks whether it has finished. If it has finished and has a parent, the Event Loop queues up the parent coroutine, walking back up the chain of nested coroutine calls (the finished child is deleted later, inside <strong>await</strong>). If it has finished and has no parent, the Event Loop deletes it.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_run</span><span class="hljs-params">(EventLoop* l)</span> </span>{
        AtomicBool_set(l-&gt;running, <span class="hljs-literal">true</span>);

        <span class="hljs-keyword">while</span> (AtomicBool_get(l-&gt;running)) {
                Coroutine* co = <span class="hljs-literal">NULL</span>;
                <span class="hljs-keyword">if</span> (SafeQueue_pop(l-&gt;<span class="hljs-built_in">queue</span>, &amp;co)) {
                        Coroutine_resume(co);
                        <span class="hljs-keyword">if</span> (co-&gt;finished) {
                                <span class="hljs-keyword">if</span> (co-&gt;parent != <span class="hljs-literal">NULL</span>) {
                                        EventLoop_resume(l, co-&gt;parent);
                                } <span class="hljs-keyword">else</span> {
                                        delete_Coroutine(co);
                                }
                        }
                }
        }
}
</code></pre>
<p>When a coroutine is resumed, its code gets to run. From there, we can delegate an asynchronous call to the worker threads by calling the <strong>async</strong> function. We wrap the function that we want to execute, a reference to the current coroutine (as the parent) and a reference to any input data into a Task object, which we pass as the argument to <strong>async</strong>. It then creates a new coroutine from the data in the Task object and pushes it into the Thread Pool queue.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">download_images</span><span class="hljs-params">(Coroutine* co)</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] download image 1\n"</span>, pthread_self());
        <span class="hljs-built_in">string</span> str1;
        Coroutine* c1 = EventLoop_async(loop, (Task){
                        .fn = get_image,
                        .parent = co,
                        .data = str1
                        });
        <span class="hljs-keyword">char</span>* img1 = (<span class="hljs-keyword">char</span>*)EventLoop_await(loop, c1);
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] %s\n"</span>, pthread_self(), img1);

        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] download image 2\n"</span>, pthread_self());
        <span class="hljs-built_in">string</span> str2;
        Coroutine* c2 = EventLoop_async(loop, (Task){
                        .fn = get_image,
                        .parent = co,
                        .data = str2
                        });
        <span class="hljs-keyword">char</span>* img2 = (<span class="hljs-keyword">char</span>*)EventLoop_await(loop, c2);
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] %s\n"</span>, pthread_self(), img2);

        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<pre><code class="lang-c"><span class="hljs-function">Coroutine* <span class="hljs-title">EventLoop_async</span><span class="hljs-params">(EventLoop* l, Task task)</span> </span>{
        Coroutine* c = new_Coroutine(task.fn, task.parent);
        c-&gt;yield_value = task.data;
        ThreadPool_submit(l-&gt;pool, c);
        <span class="hljs-keyword">return</span> c;
}
</code></pre>
<p>The Worker Threads behave similarly to the Event Loop. They pick up a coroutine from the Thread Pool queue and resume it. After that, they check whether that coroutine has finished. If it has, they queue up its parent coroutine into the Event Loop queue.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">ThreadPool_run</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        ThreadPool* p = (ThreadPool*) arg;

        <span class="hljs-keyword">while</span> (AtomicBool_get(p-&gt;running)) {
                Coroutine* co;
                <span class="hljs-keyword">if</span> (SafeQueue_pop(p-&gt;<span class="hljs-built_in">queue</span>, &amp;co)) {
                        Coroutine_resume(co);
                        <span class="hljs-keyword">if</span> (co-&gt;finished) {
                                EventLoop_resume(p-&gt;loop, co-&gt;parent);
                        }
                }
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
}
</code></pre>
<p>After triggering an asynchronous call from a coroutine, we can wait for its result using the <strong>await</strong> function. It suspends the parent coroutine. When the execution flow comes back, it saves the yield value of the awaited coroutine, deletes that coroutine, and returns the saved value.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">EventLoop_await</span><span class="hljs-params">(EventLoop* l, Coroutine* co)</span> </span>{
        Coroutine_suspend(co-&gt;parent, co-&gt;parent-&gt;yield_value);
        <span class="hljs-keyword">void</span>* result = co-&gt;yield_value;
        delete_Coroutine(co);
        <span class="hljs-keyword">return</span> result;
}
</code></pre>
<h1 id="heading-event-loop">Event Loop</h1>
<p>Here is the complete code of the final Event Loop:</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_resume</span><span class="hljs-params">(EventLoop* l, Coroutine* co)</span></span>;

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">ThreadPool</span> {</span>
        SafeQueue*      <span class="hljs-built_in">queue</span>;
        EventLoop*      loop;
        <span class="hljs-keyword">pthread_t</span>*      threads;
        <span class="hljs-keyword">size_t</span>          nthreads;
        AtomicBool*     running;
} ThreadPool;

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">ThreadPool_submit</span><span class="hljs-params">(ThreadPool* p, Coroutine* co)</span> </span>{
        <span class="hljs-keyword">while</span>(!SafeQueue_push(p-&gt;<span class="hljs-built_in">queue</span>, &amp;co));
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span>* <span class="hljs-title">ThreadPool_run</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        ThreadPool* p = (ThreadPool*) arg;

        <span class="hljs-keyword">while</span> (AtomicBool_get(p-&gt;running)) {
                Coroutine* co;
                <span class="hljs-keyword">if</span> (SafeQueue_pop(p-&gt;<span class="hljs-built_in">queue</span>, &amp;co)) {
                        Coroutine_resume(co);
                        <span class="hljs-keyword">if</span> (co-&gt;finished) {
                                EventLoop_resume(p-&gt;loop, co-&gt;parent);
                        }
                }
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">ThreadPool_start</span><span class="hljs-params">(ThreadPool* p)</span> </span>{
        AtomicBool_set(p-&gt;running, <span class="hljs-literal">true</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i=<span class="hljs-number">0</span>; i&lt;p-&gt;nthreads; ++i) {
                pthread_create(&amp;p-&gt;threads[i], <span class="hljs-literal">NULL</span>, ThreadPool_run, p);
        }
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
ThreadPool* <span class="hljs-title">new_ThreadPool</span><span class="hljs-params">(EventLoop* loop)</span> </span>{
        ThreadPool* p = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(ThreadPool));

        p-&gt;nthreads = NUM_THREADS;
        p-&gt;<span class="hljs-built_in">queue</span>    = new_SafeQueue(<span class="hljs-keyword">sizeof</span>(Coroutine*), QUEUE_SIZE);
        p-&gt;threads  = <span class="hljs-built_in">calloc</span>(p-&gt;nthreads, <span class="hljs-keyword">sizeof</span>(<span class="hljs-keyword">pthread_t</span>));
        p-&gt;loop = loop;
        p-&gt;running = new_AtomicBool();

        ThreadPool_start(p);

        <span class="hljs-keyword">return</span> p;
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">delete_ThreadPool</span><span class="hljs-params">(ThreadPool* p)</span> </span>{
        AtomicBool_set(p-&gt;running, <span class="hljs-literal">false</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i=<span class="hljs-number">0</span>; i&lt;p-&gt;nthreads; ++i) {
                pthread_join(p-&gt;threads[i], <span class="hljs-literal">NULL</span>);
        }
        <span class="hljs-built_in">free</span>(p-&gt;threads);

        delete_SafeQueue(p-&gt;<span class="hljs-built_in">queue</span>);
        delete_AtomicBool(p-&gt;running);

        <span class="hljs-built_in">free</span>(p);
}

<span class="hljs-comment">// *******************************************************</span>

<span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">EventLoop</span> {</span>
        ThreadPool* pool;
        SafeQueue*  <span class="hljs-built_in">queue</span>;
        AtomicBool* running;
};

<span class="hljs-function">EventLoop* <span class="hljs-title">new_EventLoop</span><span class="hljs-params">()</span> </span>{
        EventLoop* l = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(EventLoop));

        l-&gt;pool = new_ThreadPool(l);
        l-&gt;<span class="hljs-built_in">queue</span> = new_SafeQueue(<span class="hljs-keyword">sizeof</span>(Coroutine*), QUEUE_SIZE);
        l-&gt;running = new_AtomicBool();

        <span class="hljs-keyword">return</span> l;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_EventLoop</span><span class="hljs-params">(EventLoop* l)</span> </span>{
        delete_ThreadPool(l-&gt;pool);
        delete_SafeQueue(l-&gt;<span class="hljs-built_in">queue</span>);
        delete_AtomicBool(l-&gt;running);
        <span class="hljs-built_in">free</span>(l);
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_resume</span><span class="hljs-params">(EventLoop* l, Coroutine* co)</span> </span>{
        <span class="hljs-keyword">while</span> (!SafeQueue_push(l-&gt;<span class="hljs-built_in">queue</span>, &amp;co));
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_run</span><span class="hljs-params">(EventLoop* l)</span> </span>{
        AtomicBool_set(l-&gt;running, <span class="hljs-literal">true</span>);

        <span class="hljs-keyword">while</span> (AtomicBool_get(l-&gt;running)) {
                Coroutine* co = <span class="hljs-literal">NULL</span>;
                <span class="hljs-keyword">if</span> (SafeQueue_pop(l-&gt;<span class="hljs-built_in">queue</span>, &amp;co)) {
                        Coroutine_resume(co);
                        <span class="hljs-keyword">if</span> (co-&gt;finished) {
                                <span class="hljs-keyword">if</span> (co-&gt;parent != <span class="hljs-literal">NULL</span>) {
                                        EventLoop_resume(l, co-&gt;parent);
                                } <span class="hljs-keyword">else</span> {
                                        delete_Coroutine(co);
                                }
                        }
                }
        }
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_stop</span><span class="hljs-params">(EventLoop* l)</span> </span>{
        AtomicBool_set(l-&gt;running, <span class="hljs-literal">false</span>);
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_submit</span><span class="hljs-params">(EventLoop* l, CoroutineFn fn)</span> </span>{
        Coroutine* co = new_Coroutine(fn, <span class="hljs-literal">NULL</span>);
        EventLoop_resume(l, co);
}

<span class="hljs-function">Coroutine* <span class="hljs-title">EventLoop_async</span><span class="hljs-params">(EventLoop* l, Task task)</span> </span>{
        Coroutine* c = new_Coroutine(task.fn, task.parent);
        c-&gt;yield_value = task.data;
        ThreadPool_submit(l-&gt;pool, c);
        <span class="hljs-keyword">return</span> c;
}

<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">EventLoop_await</span><span class="hljs-params">(EventLoop* l, Coroutine* co)</span> </span>{
        Coroutine_suspend(co-&gt;parent, co-&gt;parent-&gt;yield_value);
        <span class="hljs-keyword">void</span>* result = co-&gt;yield_value;
        delete_Coroutine(co);
        <span class="hljs-keyword">return</span> result;
}
</code></pre>
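<p>The Event Loop above also depends on the <code>SafeQueue</code> and <code>AtomicBool</code> helpers developed in the previous posts. If you want to compile this post on its own, a minimal mutex-based sketch of both is enough. This is an illustration consistent with how they are used here (non-blocking <code>push</code>/<code>pop</code> returning a boolean), not the original implementation, and the <code>NUM_THREADS</code>/<code>QUEUE_SIZE</code> values are assumptions:</p>

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Values assumed for this sketch; the originals may differ */
#define NUM_THREADS 4
#define QUEUE_SIZE  64

/* Bounded ring buffer guarded by a mutex. push/pop never block: they
 * return false when the queue is full/empty, which matches the busy-wait
 * loops in the Event Loop and Thread Pool code above. */
typedef struct SafeQueue {
        pthread_mutex_t mutex;
        char*           buffer;
        size_t          elem_size;
        size_t          capacity;
        size_t          count;
        size_t          head;
        size_t          tail;
} SafeQueue;

SafeQueue* new_SafeQueue(size_t elem_size, size_t capacity) {
        SafeQueue* q = calloc(1, sizeof(SafeQueue));
        q->elem_size = elem_size;
        q->capacity  = capacity;
        q->buffer    = calloc(capacity, elem_size);
        pthread_mutex_init(&q->mutex, NULL);
        return q;
}

bool SafeQueue_push(SafeQueue* q, const void* elem) {
        pthread_mutex_lock(&q->mutex);
        bool ok = q->count < q->capacity;
        if (ok) {
                memcpy(q->buffer + q->tail * q->elem_size, elem, q->elem_size);
                q->tail = (q->tail + 1) % q->capacity;
                q->count++;
        }
        pthread_mutex_unlock(&q->mutex);
        return ok;
}

bool SafeQueue_pop(SafeQueue* q, void* elem) {
        pthread_mutex_lock(&q->mutex);
        bool ok = q->count > 0;
        if (ok) {
                memcpy(elem, q->buffer + q->head * q->elem_size, q->elem_size);
                q->head = (q->head + 1) % q->capacity;
                q->count--;
        }
        pthread_mutex_unlock(&q->mutex);
        return ok;
}

void delete_SafeQueue(SafeQueue* q) {
        pthread_mutex_destroy(&q->mutex);
        free(q->buffer);
        free(q);
}

/* A boolean flag guarded by a mutex; C11 stdatomic would also work */
typedef struct AtomicBool {
        pthread_mutex_t mutex;
        bool            value;
} AtomicBool;

AtomicBool* new_AtomicBool(void) {
        AtomicBool* b = calloc(1, sizeof(AtomicBool));
        pthread_mutex_init(&b->mutex, NULL);
        return b;
}

bool AtomicBool_get(AtomicBool* b) {
        pthread_mutex_lock(&b->mutex);
        bool v = b->value;
        pthread_mutex_unlock(&b->mutex);
        return v;
}

void AtomicBool_set(AtomicBool* b, bool v) {
        pthread_mutex_lock(&b->mutex);
        b->value = v;
        pthread_mutex_unlock(&b->mutex);
}

void delete_AtomicBool(AtomicBool* b) {
        pthread_mutex_destroy(&b->mutex);
        free(b);
}
```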
<h1 id="heading-example">Example</h1>
<p>And here is the code of the example:</p>
<pre><code class="lang-c"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdio.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdlib.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;string.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;unistd.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;pthread.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;time.h&gt;</span></span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">"loop.h"</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">"coroutine.h"</span></span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">define</span> STR 128</span>
<span class="hljs-keyword">typedef</span> <span class="hljs-keyword">char</span> <span class="hljs-built_in">string</span>[STR];

<span class="hljs-keyword">static</span>
EventLoop* loop;

<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">get_image</span><span class="hljs-params">(Coroutine* co)</span> </span>{
        sleep(<span class="hljs-number">1</span>);
        <span class="hljs-keyword">int</span> r = rand();

        <span class="hljs-keyword">char</span> buffer[<span class="hljs-number">16</span>];
        <span class="hljs-built_in">sprintf</span>(buffer, <span class="hljs-string">"%d"</span>, r);

        <span class="hljs-keyword">char</span>* result = (<span class="hljs-keyword">char</span>*)co-&gt;yield_value;
        <span class="hljs-built_in">strcpy</span>(result, <span class="hljs-string">"image id: "</span>); <span class="hljs-comment">// strcpy, not strcat: the caller's buffer is uninitialized</span>
        <span class="hljs-built_in">strcat</span>(result, buffer);

        <span class="hljs-keyword">return</span> result;
}

<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">download_images</span><span class="hljs-params">(Coroutine* co)</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] download image 1\n"</span>, pthread_self());
        <span class="hljs-built_in">string</span> str1;
        Coroutine* c1 = EventLoop_async(loop, (Task){
                        .fn = get_image,
                        .parent = co,
                        .data = str1
                        });
        <span class="hljs-keyword">char</span>* img1 = (<span class="hljs-keyword">char</span>*)EventLoop_await(loop, c1);
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] %s\n"</span>, pthread_self(), img1);

        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] download image 2\n"</span>, pthread_self());
        <span class="hljs-built_in">string</span> str2;
        Coroutine* c2 = EventLoop_async(loop, (Task){
                        .fn = get_image,
                        .parent = co,
                        .data = str2
                        });
        <span class="hljs-keyword">char</span>* img2 = (<span class="hljs-keyword">char</span>*)EventLoop_await(loop, c2);
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] %s\n"</span>, pthread_self(), img2);

        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}

<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">say_hello</span><span class="hljs-params">(Coroutine* co)</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] Hello\n"</span>, pthread_self());
        <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
}

<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">run_cli</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        <span class="hljs-keyword">char</span>* line = <span class="hljs-literal">NULL</span>;
        <span class="hljs-keyword">size_t</span> len = <span class="hljs-number">0</span>;
        <span class="hljs-keyword">ssize_t</span> read;
        <span class="hljs-keyword">while</span> ((read = getline(&amp;line, &amp;len, <span class="hljs-built_in">stdin</span>)) != <span class="hljs-number">-1</span>) {
                <span class="hljs-keyword">if</span> (read&gt;<span class="hljs-number">2</span> &amp;&amp; <span class="hljs-built_in">strncmp</span>(<span class="hljs-string">"quit"</span>, line, read<span class="hljs-number">-1</span>) == <span class="hljs-number">0</span>) {
                        EventLoop_stop(loop);
                        <span class="hljs-keyword">break</span>;
                } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (read&gt;<span class="hljs-number">2</span> &amp;&amp; <span class="hljs-built_in">strncmp</span>(<span class="hljs-string">"download"</span>, line, read<span class="hljs-number">-1</span>) == <span class="hljs-number">0</span>) {
                        EventLoop_submit(loop, download_images);
                } <span class="hljs-keyword">else</span> {
                        EventLoop_submit(loop, say_hello);
                }
        }
        <span class="hljs-keyword">if</span> (line != <span class="hljs-literal">NULL</span>) {
                <span class="hljs-built_in">free</span>(line);
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">test_eventLoop</span><span class="hljs-params">()</span> </span>{
        loop = new_EventLoop();

        <span class="hljs-keyword">pthread_t</span> t;
        pthread_create(&amp;t, <span class="hljs-literal">NULL</span>, run_cli, <span class="hljs-literal">NULL</span>);

        EventLoop_run(loop);

        delete_EventLoop(loop);

        pthread_join(t, <span class="hljs-literal">NULL</span>);
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
        srand(time(<span class="hljs-literal">NULL</span>));

        test_eventLoop();

        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<p>And the corresponding output:</p>
<pre><code class="lang-bash">[139925417781056] Hello
download
[139925417781056] download image 1
[139925417781056] image id: 648455896
[139925417781056] download image 2
[139925417781056] image id: 39422855
</code></pre>
<h1 id="heading-conclusion">Conclusion</h1>
<p>As you can see, the client code is much easier to read using async/await. It also helps us manage memory. In this example we are using static memory allocation, but we could also use dynamic allocation, and it would be easy to match each allocation with its deallocation, as both can happen in the same function.</p>
<p>And that's all. I hope you have enjoyed this journey through the internals of a toy async/await implementation. It definitely helped me understand how its logic works, which is not exactly straightforward the first time you come across this pattern. Thank you very much if you made it to the end of this series. I would very much appreciate any comments or suggestions you may have. See you in the next post!</p>
]]></content:encoded></item><item><title><![CDATA[C async/await - Part 3]]></title><description><![CDATA[We saw in part 2 of this series how to implement a basic Thread Pool. But there was a problem when we wanted to report values to the standard output from multiple threads, because there was no synchronization among them. And I mentioned that the ...]]></description><link>https://textmode.dev/c-async-await-part-3</link><guid isPermaLink="true">https://textmode.dev/c-async-await-part-3</guid><category><![CDATA[Event Loop]]></category><category><![CDATA[ThreadPools]]></category><category><![CDATA[async]]></category><category><![CDATA[async/await]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sun, 25 Feb 2024 13:47:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708868831935/5b170b11-bc63-4596-b33e-7926d4df2fa4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We saw in <a target="_blank" href="https://terminalprogrammer.com/zig-for-c-programmers-asyncawait-part-2">part 2</a> of this series how to implement a basic Thread Pool. But there was a problem when we wanted to report values to the standard output from multiple threads, because there was no synchronization among them. And I mentioned that the solution would be to use an Event Loop.</p>
<p>An Event Loop in a UI framework is a critical component responsible for managing and handling various events, such as user input (like mouse clicks or keyboard presses), system notifications, and other asynchronous operations. It ensures that events are processed sequentially, one at a time, avoiding any overlap among them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708705111784/6688dfac-530d-4a17-908b-467259b556f8.png" alt class="image--center mx-auto" /></p>
<p>In the case of asynchronous programming (i.e., processing done by other threads), if we send the output generated by the worker threads back to the Event Loop, we can also sequence the processing of those outputs and thus avoid any overlap. The point is that, since an Event Loop runs on a single thread, we can use it as a synchronization point whenever we need to sequence the processing of multiple sources of information.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708705184100/828aeaa3-6400-4a9f-8e9b-fa674be4ff3e.png" alt class="image--center mx-auto" /></p>
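<p>The idea can be sketched in a few lines (single-threaded here for brevity, with invented names): events from several sources land in one queue, and a single consumer drains them one at a time, so their effects never interleave.</p>
<pre><code class="lang-c">#include &lt;stddef.h&gt;

typedef void (*EventFn)(int* state);

static void ev_increment(int* state) { *state += 1; }
static void ev_double(int* state)    { *state *= 2; }

/* Process every queued event strictly in order, one at a time. */
int drain_events(EventFn* events, size_t n, int start) {
        int state = start;
        for (size_t i = 0; i &lt; n; ++i) {
                events[i](&amp;state);
        }
        return state;
}
</code></pre>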
<p>One of the most famous Event Loop implementations is <a target="_blank" href="https://docs.libuv.org/en/v1.x/design.html"><strong>libuv</strong></a>, the library used to implement the event-driven asynchronous I/O model in <strong>Node.js</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708706176444/6470fa7e-3a6e-4fc1-b311-dffac80f1048.png" alt class="image--center mx-auto" /></p>
<p>As you can see, every task in <strong>libuv</strong> is linked to a pair of callbacks: the <strong>work_cb</strong>, which is executed by the worker thread that processes the task, and the <strong>after_work_cb</strong>, which is executed by the Event Loop thread when it picks the task up from the done queue.</p>
<p>We can follow a similar approach, using a pair of callbacks for each event or task to be executed. But instead of keeping an input queue and a done queue, we will merge the two to simplify the implementation, so user input events will share the same queue as completed tasks. This is just a simplification to keep the code in this example shorter and easier to read.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708706463310/89fda56a-1590-4641-bdb7-35f4ea6047d4.png" alt class="image--center mx-auto" /></p>
<p>Starting from our basic Thread Pool implementation, we can build a basic Event Loop on top of it as follows. It provides run and stop functions to start and stop the Event Loop, but the key functions are submit and async.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">typedef</span> <span class="hljs-title">void</span> <span class="hljs-params">(*TaskFn)</span><span class="hljs-params">(<span class="hljs-keyword">void</span>*)</span></span>;

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Task</span> {</span>
        TaskFn cb;
        TaskFn done_cb;
        <span class="hljs-keyword">void</span>*  data;
} Task;

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">EventLoop</span> <span class="hljs-title">EventLoop</span>;</span>

<span class="hljs-function">EventLoop* <span class="hljs-title">new_EventLoop</span><span class="hljs-params">()</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_EventLoop</span><span class="hljs-params">(EventLoop* l)</span></span>;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_submit</span><span class="hljs-params">(EventLoop* l, Task task)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_async</span><span class="hljs-params">(EventLoop* l, Task task)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_run</span><span class="hljs-params">(EventLoop* l)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_stop</span><span class="hljs-params">(EventLoop* l)</span></span>;
</code></pre>
<p>The <strong>submit</strong> function inserts a task into the done queue for the Event Loop thread to process. Both the user input thread and the worker threads use this function to insert tasks into that queue.</p>
<p>The <strong>async</strong> function, in contrast, inserts a task into the work queue, so that it gets picked up by a worker thread for processing. This function is called from the Event Loop thread.</p>
<pre><code class="lang-c"><span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">ThreadPool</span> {</span>
        SafeQueue*    <span class="hljs-built_in">queue</span>;
        EventLoop*    loop;
        <span class="hljs-keyword">pthread_t</span>*    threads;
        <span class="hljs-keyword">size_t</span>        nthreads;
        AtomicBool*   running;
} ThreadPool;

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span>* <span class="hljs-title">ThreadPool_run</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        ThreadPool* p = (ThreadPool*) arg;

        <span class="hljs-keyword">while</span> (AtomicBool_get(p-&gt;running)) {
                Task task;
                <span class="hljs-keyword">if</span> (SafeQueue_pop(p-&gt;<span class="hljs-built_in">queue</span>, &amp;task)) {
                        task.cb(task.data);
                        EventLoop_submit(p-&gt;loop, task);
                }
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">ThreadPool_start</span><span class="hljs-params">(ThreadPool* p)</span> </span>{
        AtomicBool_set(p-&gt;running, <span class="hljs-literal">true</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i=<span class="hljs-number">0</span>; i&lt;p-&gt;nthreads; ++i) {
                pthread_create(&amp;p-&gt;threads[i], <span class="hljs-literal">NULL</span>, ThreadPool_run, p);
        }
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
ThreadPool* <span class="hljs-title">new_ThreadPool</span><span class="hljs-params">(EventLoop* loop)</span> </span>{
        ThreadPool* p = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(ThreadPool));

        p-&gt;nthreads = NUM_THREADS;
        p-&gt;<span class="hljs-built_in">queue</span>    = new_SafeQueue(<span class="hljs-keyword">sizeof</span>(Task), QUEUE_SIZE);
        p-&gt;threads  = <span class="hljs-built_in">calloc</span>(p-&gt;nthreads, <span class="hljs-keyword">sizeof</span>(<span class="hljs-keyword">pthread_t</span>));
        p-&gt;loop = loop;
        p-&gt;running = new_AtomicBool();

        ThreadPool_start(p);

        <span class="hljs-keyword">return</span> p;
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">delete_ThreadPool</span><span class="hljs-params">(ThreadPool* p)</span> </span>{
        AtomicBool_set(p-&gt;running, <span class="hljs-literal">false</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i=<span class="hljs-number">0</span>; i&lt;p-&gt;nthreads; ++i) {
                pthread_join(p-&gt;threads[i], <span class="hljs-literal">NULL</span>);
        }
        <span class="hljs-built_in">free</span>(p-&gt;threads);

        delete_SafeQueue(p-&gt;<span class="hljs-built_in">queue</span>);
        delete_AtomicBool(p-&gt;running);

        <span class="hljs-built_in">free</span>(p);
}

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">ThreadPool_submit</span><span class="hljs-params">(ThreadPool* p, Task task)</span> </span>{
        <span class="hljs-keyword">while</span>(!SafeQueue_push(p-&gt;<span class="hljs-built_in">queue</span>, &amp;task));
}

<span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">EventLoop</span> {</span>
        ThreadPool* pool;
        SafeQueue*  <span class="hljs-built_in">queue</span>;
        AtomicBool* running;
};

<span class="hljs-function">EventLoop* <span class="hljs-title">new_EventLoop</span><span class="hljs-params">()</span> </span>{
        EventLoop* l = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(EventLoop));

        l-&gt;pool = new_ThreadPool(l);
        l-&gt;<span class="hljs-built_in">queue</span> = new_SafeQueue(<span class="hljs-keyword">sizeof</span>(Task), QUEUE_SIZE);
        l-&gt;running = new_AtomicBool();

        <span class="hljs-keyword">return</span> l;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_EventLoop</span><span class="hljs-params">(EventLoop* l)</span> </span>{
        delete_ThreadPool(l-&gt;pool);
        delete_SafeQueue(l-&gt;<span class="hljs-built_in">queue</span>);
        delete_AtomicBool(l-&gt;running);
        <span class="hljs-built_in">free</span>(l);
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_run</span><span class="hljs-params">(EventLoop* l)</span> </span>{
        AtomicBool_set(l-&gt;running, <span class="hljs-literal">true</span>);

        <span class="hljs-keyword">while</span> (AtomicBool_get(l-&gt;running)) {
                Task task;
                <span class="hljs-keyword">if</span> (SafeQueue_pop(l-&gt;<span class="hljs-built_in">queue</span>, &amp;task)) {
                        task.done_cb(task.data);
                }
        }
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_submit</span><span class="hljs-params">(EventLoop* l, Task task)</span> </span>{
        <span class="hljs-keyword">while</span> (!SafeQueue_push(l-&gt;<span class="hljs-built_in">queue</span>, &amp;task));
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_async</span><span class="hljs-params">(EventLoop* l, Task task)</span> </span>{
        ThreadPool_submit(l-&gt;pool, task);
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">EventLoop_stop</span><span class="hljs-params">(EventLoop* l)</span> </span>{
        AtomicBool_set(l-&gt;running, <span class="hljs-literal">false</span>);
}
</code></pre>
<p>We can now write a simple command line example that exercises our implementation. It accepts a "<strong>download</strong>" command to simulate a long-running job that we want to delegate to the worker threads. Meanwhile, if the user enters any other command (apart from "<strong>quit</strong>"), the application responds with a "<strong>hello</strong>" message, just to confirm that the Event Loop keeps processing user input events while the long-running tasks are handled by the worker threads.</p>
<pre><code class="lang-c"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdio.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdlib.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;string.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;unistd.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;pthread.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;time.h&gt;</span></span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">"loop.h"</span></span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">define</span> STR 128</span>

<span class="hljs-keyword">static</span>
EventLoop* loop;

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">get_image</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* data)</span> </span>{
        <span class="hljs-keyword">char</span>* result = (<span class="hljs-keyword">char</span>*)data;

        <span class="hljs-keyword">int</span> r = rand();
        <span class="hljs-keyword">char</span> buffer[<span class="hljs-number">16</span>];
        <span class="hljs-built_in">sprintf</span>(buffer, <span class="hljs-string">"%d"</span>, r);
        <span class="hljs-built_in">strcat</span>(result, buffer);
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">done_get_image</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] BEGIN done_get_image "</span>, pthread_self());

        sleep(<span class="hljs-number">1</span>);
        <span class="hljs-keyword">char</span>* img = (<span class="hljs-keyword">char</span>*)arg;
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"%s "</span>, img);
        <span class="hljs-built_in">free</span>(img);

        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] END done_get_image\n"</span>, pthread_self());
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">download_img</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        <span class="hljs-keyword">char</span>* str = <span class="hljs-built_in">calloc</span>(STR, <span class="hljs-keyword">sizeof</span>(<span class="hljs-keyword">char</span>));
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] download image\n"</span>, pthread_self());
        EventLoop_async(loop, (Task){
                        .cb = get_image,
                        .done_cb = done_get_image,
                        .data = str
                        });
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">say_hello</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] Hello\n"</span>, pthread_self());
}

<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">run_cli</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        <span class="hljs-keyword">char</span>* line = <span class="hljs-literal">NULL</span>;
        <span class="hljs-keyword">size_t</span> len = <span class="hljs-number">0</span>;
        <span class="hljs-keyword">ssize_t</span> read;
        <span class="hljs-keyword">while</span> ((read = getline(&amp;line, &amp;len, <span class="hljs-built_in">stdin</span>)) != <span class="hljs-number">-1</span>) {
                <span class="hljs-keyword">if</span> (read&gt;<span class="hljs-number">2</span> &amp;&amp; <span class="hljs-built_in">strncmp</span>(<span class="hljs-string">"quit"</span>, line, read<span class="hljs-number">-1</span>) == <span class="hljs-number">0</span>) {
                        EventLoop_stop(loop);
                        <span class="hljs-keyword">break</span>;
                } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (read&gt;<span class="hljs-number">2</span> &amp;&amp; <span class="hljs-built_in">strncmp</span>(<span class="hljs-string">"download"</span>, line, read<span class="hljs-number">-1</span>) == <span class="hljs-number">0</span>) {
                        EventLoop_submit(loop, (Task){
                                        .cb = <span class="hljs-literal">NULL</span>,
                                        .done_cb = download_img,
                                        .data = <span class="hljs-literal">NULL</span>
                                        });
                } <span class="hljs-keyword">else</span> {
                        EventLoop_submit(loop, (Task){
                                        .cb = <span class="hljs-literal">NULL</span>,
                                        .done_cb = say_hello,
                                        .data = <span class="hljs-literal">NULL</span>
                                        });
                }
        }
        <span class="hljs-keyword">if</span> (line != <span class="hljs-literal">NULL</span>) {
                <span class="hljs-built_in">free</span>(line);
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">test_eventLoop</span><span class="hljs-params">()</span> </span>{
        loop = new_EventLoop();

        <span class="hljs-keyword">pthread_t</span> t;
        pthread_create(&amp;t, <span class="hljs-literal">NULL</span>, run_cli, <span class="hljs-literal">NULL</span>);

        EventLoop_run(loop);

        delete_EventLoop(loop);

        pthread_join(t, <span class="hljs-literal">NULL</span>);
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
        srand(time(<span class="hljs-literal">NULL</span>));

        test_eventLoop();

        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<p>Unlike the output we got in the previous <a target="_blank" href="https://terminalprogrammer.com/zig-for-c-programmers-asyncawait-part-2"><strong>post</strong></a>, where the text generated by the worker threads could overlap, here the output of one task can never overlap another's, because all outputs are processed by the single Event Loop thread. Thus, we have managed to sequence the processing of the tasks' outputs.</p>
<pre><code class="lang-c">[<span class="hljs-number">139901546895168</span>] Hello

[<span class="hljs-number">139901546895168</span>] Hello
<span class="hljs-keyword">do</span>
[<span class="hljs-number">139901546895168</span>] download image
<span class="hljs-keyword">do</span>
[<span class="hljs-number">139901546895168</span>] BEGIN done_get_image <span class="hljs-number">1087146321</span> [<span class="hljs-number">139901546895168</span>] END done_get_image
[<span class="hljs-number">139901546895168</span>] download image
[<span class="hljs-number">139901546895168</span>] BEGIN done_get_image <span class="hljs-number">925167204</span> [<span class="hljs-number">139901546895168</span>] END done_get_image

[<span class="hljs-number">139901546895168</span>] Hello
</code></pre>
<p>However, as you can tell from this ugly code, we have run into something similar to the famous <strong>callback hell</strong> of the JavaScript world. Chaining asynchronous calls using this strategy is really cumbersome.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">done_get_image_1</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        <span class="hljs-keyword">char</span>* img = (<span class="hljs-keyword">char</span>*)arg;
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] %s\n"</span>, pthread_self(), img);
        <span class="hljs-built_in">free</span>(img);

        <span class="hljs-comment">// We have to chain another async call from this first callback</span>
        <span class="hljs-keyword">char</span>* str = <span class="hljs-built_in">calloc</span>(STR, <span class="hljs-keyword">sizeof</span>(<span class="hljs-keyword">char</span>));
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] download image 2\n"</span>, pthread_self());
        EventLoop_async(loop, (Task){
                        .cb = get_image,
                        .done_cb = done_get_image_2,
                        .data = str
                        });
}
</code></pre>
<p>And it is also very cumbersome to track dynamic memory allocations in languages that do not provide garbage collection.</p>
<pre><code class="lang-c"><span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">done_get_image_2</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        <span class="hljs-keyword">char</span>* img = (<span class="hljs-keyword">char</span>*)arg;
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] %s\n"</span>, pthread_self(), img);

        <span class="hljs-comment">// It is really difficult to track where this memory was allocated</span>
        <span class="hljs-built_in">free</span>(img);
}
</code></pre>
<p>Here is where the <strong>async/await</strong> ergonomics come in handy. With them we can translate the callback chain into sequential code, so it feels like writing synchronous code, which makes the logic of the program much easier for the reader to follow. It also makes handling the deallocation of dynamic memory easier, which is especially valuable in languages like C or Zig, where there is no garbage collection. We will see all of that in the last part of this series, where we will implement the async/await logic using the pieces we have been developing so far.</p>
]]></content:encoded></item><item><title><![CDATA[C async/await - Part 2]]></title><description><![CDATA[Now that we have our coroutines implementation ready, let's move on to the next concept required for the implementation of async/await, thread pools.
A thread pool is a concurrent programming concept that involves managing a group or pool of pre-init...]]></description><link>https://textmode.dev/c-async-await-part-2</link><guid isPermaLink="true">https://textmode.dev/c-async-await-part-2</guid><category><![CDATA[C]]></category><category><![CDATA[ThreadPools]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sat, 10 Feb 2024 09:49:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707559229452/4bbe9ac9-39fa-4260-b539-1e097858ef88.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Now that we have our <a target="_blank" href="https://terminalprogrammer.com/zig-for-c-programmers-asyncawait-part-1">coroutines</a> implementation ready, let's move on to the next concept required for the implementation of async/await, thread pools.</p>
<p>A thread pool is a concurrent programming concept that involves managing a group or pool of pre-initialized threads to efficiently execute tasks. The architecture of a thread pool typically consists of the following components:</p>
<ul>
<li><p><strong>Thread Pool Queue:</strong> The queue holds the tasks or jobs that need to be executed by the thread pool. When a task arrives, it is added to the queue, and an available thread from the pool picks it up for execution. This queue helps in managing the order of task execution and prevents overloading the system with too many active threads simultaneously.</p>
</li>
<li><p><strong>Worker Threads</strong>: Worker threads are pre-initialized threads kept in the pool, ready to execute tasks. They continuously check the task queue for new tasks to execute. Once a task is obtained from the queue, a worker thread processes it and becomes available for the next task.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707305653626/039c9b67-3f03-4a0e-aaf8-fe3ffc1c7840.png" alt class="image--center mx-auto" /></p>
<p>Since many threads will be accessing the task queue in parallel looking for new tasks, we need a mechanism that avoids data races. We will get one by implementing a thread-safe queue. But first, we need to start with the implementation of a basic queue.</p>
<h1 id="heading-queue">Queue</h1>
<p>A queue is a fundamental data structure in computer science that follows the First-In-First-Out (FIFO) principle. In a queue, elements are added to the rear (enqueue) and removed from the front (dequeue). Imagine it as a line of people waiting for a service—those who arrive first are served first.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707219515965/417b8044-58a3-4632-be1a-39b5a6ecbc11.png" alt class="image--center mx-auto" /></p>
<p>A circular queue, also known as a ring buffer, is a variation of the basic queue data structure with a circular arrangement of elements in a fixed-size array. Unlike a traditional queue, where elements are added at one end and removed from the other, a circular queue reuses the space in the array, creating a circular pattern.</p>
<p>For example, if we have a queue with a capacity of six elements, we push six items, and then we pop three of them, we end up with a queue in the following state:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707219540897/855dccde-0bd3-4d43-8c0c-7d820a745906.png" alt class="image--center mx-auto" /></p>
<p>If after that we push three more items into the queue, its state will be the following:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707219551332/8eb966f1-8f57-4c03-9083-e96c596d3e1f.png" alt class="image--center mx-auto" /></p>
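<p>The index arithmetic behind these diagrams boils down to two modulo operations, shown here in isolation (the helper names are mine, just for illustration):</p>
<pre><code class="lang-c">#include &lt;stddef.h&gt;

/* The next free slot sits count positions past the front, wrapping
 * around the fixed-size array. */
size_t ring_push_index(size_t front, size_t count, size_t capacity) {
        return (front + count) % capacity;
}

/* After a pop, the front advances by one, also wrapping around. */
size_t ring_next_front(size_t front, size_t capacity) {
        return (front + 1) % capacity;
}
</code></pre>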
<p>We will use this basic implementation of a circular queue as the baseline for a thread-safe queue.</p>
<pre><code class="lang-c"><span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Queue</span> {</span>
        <span class="hljs-keyword">size_t</span> item_size;
        <span class="hljs-keyword">size_t</span> front;
        <span class="hljs-keyword">size_t</span> capacity;
        <span class="hljs-keyword">size_t</span> count;
        <span class="hljs-keyword">void</span>*  items;
} Queue;

<span class="hljs-function">Queue* <span class="hljs-title">new_Queue</span><span class="hljs-params">(<span class="hljs-keyword">size_t</span> item_size, <span class="hljs-keyword">size_t</span> capacity)</span> </span>{
        <span class="hljs-keyword">if</span> (capacity &lt;= <span class="hljs-number">0</span>) {
                <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
        }

        Queue* q = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(Queue));
        q-&gt;item_size = item_size;
        q-&gt;capacity  = capacity;
        q-&gt;items     = <span class="hljs-built_in">calloc</span>(capacity, item_size);

        <span class="hljs-keyword">return</span> q;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_Queue</span><span class="hljs-params">(Queue* q)</span> </span>{
        <span class="hljs-built_in">free</span>(q-&gt;items);
        <span class="hljs-built_in">free</span>(q);
}

<span class="hljs-function"><span class="hljs-keyword">bool</span> <span class="hljs-title">Queue_push</span><span class="hljs-params">(Queue* q, <span class="hljs-keyword">void</span>* item)</span> </span>{
        <span class="hljs-keyword">if</span> (q-&gt;count == q-&gt;capacity) {
                <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }
        <span class="hljs-keyword">char</span>* items = (<span class="hljs-keyword">char</span>*)q-&gt;items;
        <span class="hljs-keyword">size_t</span> index = (q-&gt;front + q-&gt;count) % q-&gt;capacity;
        <span class="hljs-keyword">char</span>* dst = &amp;items[index * q-&gt;item_size];
        <span class="hljs-built_in">memcpy</span>(dst, item, q-&gt;item_size);
        q-&gt;count++;
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
}

<span class="hljs-function"><span class="hljs-keyword">bool</span> <span class="hljs-title">Queue_pop</span><span class="hljs-params">(Queue* q, <span class="hljs-keyword">void</span>* item)</span> </span>{
        <span class="hljs-keyword">if</span> (q-&gt;count == <span class="hljs-number">0</span>) {
                <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }
        <span class="hljs-keyword">char</span>* items = (<span class="hljs-keyword">char</span>*)q-&gt;items;
        <span class="hljs-keyword">size_t</span> index = q-&gt;front;
        <span class="hljs-keyword">char</span>* src = &amp;items[index * q-&gt;item_size];
        <span class="hljs-built_in">memcpy</span>(item, src, q-&gt;item_size);
        q-&gt;count--;
        q-&gt;front = (q-&gt;front + <span class="hljs-number">1</span>) % q-&gt;capacity;
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
}
</code></pre>
<h1 id="heading-thread-safe-queue">Thread Safe Queue</h1>
<p>A thread-safe queue is a data structure designed to be used in concurrent or multi-threaded environments where multiple threads may simultaneously access and modify the queue. The primary goal of a thread-safe queue is to ensure that operations such as enqueue (insertion) and dequeue (removal) can be performed safely without leading to race conditions, data corruption, or other synchronization issues. In our case, the main thread will access the queue to push new tasks, whereas the worker threads will access it to pop queued tasks.</p>
<p>Condition variables are synchronization primitives used in concurrent programming to coordinate the execution of threads. We will use them together with mutexes to implement a thread-safe queue. When a client tries to push a new item into a full queue, it waits on a condition variable until it is notified that a slot has become available. But to avoid keeping the client stuck in that call when the queue stays full for a long time, we will use a timed wait on the condition variable, so the client wakes up every second and can decide whether to keep waiting or not. The same happens on the other end of the queue: we will use a timed wait for popping items as well.</p>
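<p>On its own, the timed-wait pattern looks roughly like this (a minimal sketch, not the exact code of our queue): compute an absolute deadline one second from now and wait on the condition variable in a loop, re-checking a predicate on every wakeup, until the predicate holds or <code>pthread_cond_timedwait</code> reports <code>ETIMEDOUT</code>.</p>
<pre><code class="lang-c">#include &lt;errno.h&gt;
#include &lt;pthread.h&gt;
#include &lt;stdbool.h&gt;
#include &lt;time.h&gt;

/* Wait until *flag becomes true or one second elapses.
 * Returns true if the flag was set, false on timeout.
 * The loop also guards against spurious wakeups. */
bool wait_flag_up_to_one_second(pthread_mutex_t* mutex,
                                pthread_cond_t* cv,
                                const bool* flag) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &amp;deadline);
        deadline.tv_sec += 1;

        pthread_mutex_lock(mutex);
        int rc = 0;
        while (!*flag &amp;&amp; rc != ETIMEDOUT) {
                rc = pthread_cond_timedwait(cv, mutex, &amp;deadline);
        }
        bool ok = *flag;
        pthread_mutex_unlock(mutex);
        return ok;
}
</code></pre>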
<pre><code class="lang-c"><span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">SafeQueue</span> {</span>
        <span class="hljs-keyword">pthread_mutex_t</span>  mutex;
        <span class="hljs-keyword">pthread_cond_t</span>   pop_cv;
        <span class="hljs-keyword">pthread_cond_t</span>   push_cv;
        Queue*           <span class="hljs-built_in">queue</span>;
} SafeQueue;

<span class="hljs-function">SafeQueue* <span class="hljs-title">new_SafeQueue</span><span class="hljs-params">(<span class="hljs-keyword">size_t</span> item_size, <span class="hljs-keyword">size_t</span> capacity)</span> </span>{
        SafeQueue* q = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(SafeQueue));

        pthread_mutex_init(&amp;q-&gt;mutex, <span class="hljs-literal">NULL</span>);
        pthread_cond_init(&amp;q-&gt;pop_cv, <span class="hljs-literal">NULL</span>);
        pthread_cond_init(&amp;q-&gt;push_cv, <span class="hljs-literal">NULL</span>);

        q-&gt;<span class="hljs-built_in">queue</span> = new_Queue(item_size, capacity);

        <span class="hljs-keyword">return</span> q;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_SafeQueue</span><span class="hljs-params">(SafeQueue* q)</span> </span>{
        delete_Queue(q-&gt;<span class="hljs-built_in">queue</span>);

        pthread_cond_destroy(&amp;q-&gt;push_cv);
        pthread_cond_destroy(&amp;q-&gt;pop_cv);
        pthread_mutex_destroy(&amp;q-&gt;mutex);

        <span class="hljs-built_in">free</span>(q);
}

<span class="hljs-function">struct timespec <span class="hljs-title">get_timeout</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">timespec</span> <span class="hljs-title">result</span>;</span>
        clock_gettime(CLOCK_REALTIME, &amp;result);
        result.tv_sec += <span class="hljs-number">1</span>;
        <span class="hljs-keyword">return</span> result;
}

<span class="hljs-function"><span class="hljs-keyword">bool</span> <span class="hljs-title">SafeQueue_push</span><span class="hljs-params">(SafeQueue* q, <span class="hljs-keyword">void</span>* item)</span> </span>{
        <span class="hljs-keyword">bool</span> result = <span class="hljs-literal">false</span>;

        pthread_mutex_lock(&amp;q-&gt;mutex);
        {
                result = Queue_push(q-&gt;<span class="hljs-built_in">queue</span>, item);
                <span class="hljs-keyword">if</span> (!result) {
                        <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">timespec</span> <span class="hljs-title">timeout</span> = <span class="hljs-title">get_timeout</span>();</span>
                        pthread_cond_timedwait(&amp;q-&gt;push_cv, &amp;q-&gt;mutex, &amp;timeout);
                }
                <span class="hljs-keyword">else</span> {
                        pthread_cond_signal(&amp;q-&gt;pop_cv);
                }
        }
        pthread_mutex_unlock(&amp;q-&gt;mutex);

        <span class="hljs-keyword">return</span> result;
}

<span class="hljs-function"><span class="hljs-keyword">bool</span> <span class="hljs-title">SafeQueue_pop</span><span class="hljs-params">(SafeQueue* q, <span class="hljs-keyword">void</span>* item)</span> </span>{
        <span class="hljs-keyword">bool</span> result = <span class="hljs-literal">false</span>;

        pthread_mutex_lock(&amp;q-&gt;mutex);
        {
                result = Queue_pop(q-&gt;<span class="hljs-built_in">queue</span>, item);
                <span class="hljs-keyword">if</span> (!result) {
                        <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">timespec</span> <span class="hljs-title">timeout</span> = <span class="hljs-title">get_timeout</span>();</span>
                        pthread_cond_timedwait(&amp;q-&gt;pop_cv, &amp;q-&gt;mutex, &amp;timeout);
                }
                <span class="hljs-keyword">else</span> {
                        pthread_cond_signal(&amp;q-&gt;push_cv);
                }
        }
        pthread_mutex_unlock(&amp;q-&gt;mutex);

        <span class="hljs-keyword">return</span> result;
}
</code></pre>
<h1 id="heading-atomic-bool">Atomic Bool</h1>
<p>In order to stop the worker threads in the thread pool, we need a mechanism that allows us to notify them without running into race conditions. We will use an atomic Boolean variable for that. C11 does provide atomics through <code>&lt;stdatomic.h&gt;</code>, but to keep dependencies minimal and to show what such a type involves, we will implement one from scratch using a mutex.</p>
<p>An atomic bool is a type of variable in concurrent programming that supports atomic (indivisible) operations. We will use an atomic bool to signal to the threads in the pool that they should stop their execution.</p>
<pre><code class="lang-c"><span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">AtomicBool</span> {</span>
        <span class="hljs-keyword">pthread_mutex_t</span> mutex;
        <span class="hljs-keyword">bool</span>            value;
} AtomicBool;

<span class="hljs-function">AtomicBool* <span class="hljs-title">new_AtomicBool</span><span class="hljs-params">()</span> </span>{
        AtomicBool* b = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(AtomicBool));
        pthread_mutex_init(&amp;b-&gt;mutex, <span class="hljs-literal">NULL</span>);
        <span class="hljs-keyword">return</span> b;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_AtomicBool</span><span class="hljs-params">(AtomicBool* b)</span> </span>{
        pthread_mutex_destroy(&amp;b-&gt;mutex);
        <span class="hljs-built_in">free</span>(b);
}

<span class="hljs-function"><span class="hljs-keyword">bool</span> <span class="hljs-title">AtomicBool_get</span><span class="hljs-params">(AtomicBool* b)</span> </span>{
        <span class="hljs-keyword">bool</span> result = <span class="hljs-literal">false</span>;
        pthread_mutex_lock(&amp;b-&gt;mutex);
        {
                result = b-&gt;value;
        }
        pthread_mutex_unlock(&amp;b-&gt;mutex);
        <span class="hljs-keyword">return</span> result;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">AtomicBool_set</span><span class="hljs-params">(AtomicBool* b, <span class="hljs-keyword">bool</span> value)</span> </span>{
        pthread_mutex_lock(&amp;b-&gt;mutex);
        {
                b-&gt;value = value;
        }
        pthread_mutex_unlock(&amp;b-&gt;mutex);
}
</code></pre>
<h1 id="heading-thread-pool">Thread Pool</h1>
<p>Now that we have all the components ready, we put them all together to implement a basic version of a thread pool.</p>
<pre><code class="lang-c"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> NUM_THREADS = <span class="hljs-number">4</span>;

<span class="hljs-keyword">static</span>
<span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> QUEUE_SIZE = <span class="hljs-number">10</span>;

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">ThreadPool</span> {</span>
        SafeQueue*    <span class="hljs-built_in">queue</span>;
        <span class="hljs-keyword">pthread_t</span>*    threads;
        <span class="hljs-keyword">size_t</span>        nthreads;
        AtomicBool*   running;
} ThreadPool;

<span class="hljs-function"><span class="hljs-keyword">void</span>* <span class="hljs-title">ThreadPool_run</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* arg)</span> </span>{
        ThreadPool* p = (ThreadPool*) arg;

        <span class="hljs-keyword">while</span> (AtomicBool_get(p-&gt;running)) {
                Task task;
                <span class="hljs-keyword">if</span> (SafeQueue_pop(p-&gt;<span class="hljs-built_in">queue</span>, &amp;task)) {
                        task.cb(task.data);
                }
        }

        <span class="hljs-keyword">return</span> <span class="hljs-literal">NULL</span>;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">ThreadPool_start</span><span class="hljs-params">(ThreadPool* p)</span> </span>{
        AtomicBool_set(p-&gt;running, <span class="hljs-literal">true</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i=<span class="hljs-number">0</span>; i&lt;p-&gt;nthreads; ++i) {
                pthread_create(&amp;p-&gt;threads[i], <span class="hljs-literal">NULL</span>, ThreadPool_run, p);
        }
}

<span class="hljs-function">ThreadPool* <span class="hljs-title">new_ThreadPool</span><span class="hljs-params">()</span> </span>{
        ThreadPool* p = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(ThreadPool));

        p-&gt;nthreads = NUM_THREADS;
        p-&gt;<span class="hljs-built_in">queue</span>    = new_SafeQueue(<span class="hljs-keyword">sizeof</span>(Task), QUEUE_SIZE);
        p-&gt;threads  = <span class="hljs-built_in">calloc</span>(p-&gt;nthreads, <span class="hljs-keyword">sizeof</span>(<span class="hljs-keyword">pthread_t</span>));
        p-&gt;running = new_AtomicBool();

        ThreadPool_start(p);

        <span class="hljs-keyword">return</span> p;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_ThreadPool</span><span class="hljs-params">(ThreadPool* p)</span> </span>{
        AtomicBool_set(p-&gt;running, <span class="hljs-literal">false</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i=<span class="hljs-number">0</span>; i&lt;p-&gt;nthreads; ++i) {
                pthread_join(p-&gt;threads[i], <span class="hljs-literal">NULL</span>);
        }
        <span class="hljs-built_in">free</span>(p-&gt;threads);

        delete_SafeQueue(p-&gt;<span class="hljs-built_in">queue</span>);
        delete_AtomicBool(p-&gt;running);

        <span class="hljs-built_in">free</span>(p);
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">ThreadPool_submit</span><span class="hljs-params">(ThreadPool* p, Task task)</span> </span>{
        <span class="hljs-keyword">while</span>(!SafeQueue_push(p-&gt;<span class="hljs-built_in">queue</span>, &amp;task));
}
</code></pre>
<p>This pattern can be used for handling server requests. As each request is independent, each one can generate a response at its own pace. The problem is that if requests take too long to serve, newer clients may wait a long time for a response. That can be avoided with a green-threads pattern, similar to the one provided by goroutines in Go, where the runtime scheduler can reschedule the execution of goroutines so that none of them waits for too long.</p>
<p>In client code, we can use this pattern to execute tasks asynchronously. Its limitation is that when several asynchronous tasks generate output for the client, the worker threads run independently without any synchronization, so their outputs can end up interleaved.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707305673887/f1af9206-ece3-4f09-b17a-f6903de8d485.png" alt class="image--center mx-auto" /></p>
<p>For example, let's write an example program that uses this thread pool to simulate several asynchronous calls that write their output into the console.</p>
<pre><code class="lang-c"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdio.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdlib.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;string.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;unistd.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;pthread.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;time.h&gt;</span></span>

<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">"pool.h"</span></span>

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">download_img</span><span class="hljs-params">(<span class="hljs-keyword">void</span>* data)</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] BEGIN download_img"</span>, pthread_self());

        sleep(<span class="hljs-number">1</span>);
        <span class="hljs-keyword">int</span> r = rand();
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"%d "</span>, r);

        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] END download_img\n"</span>, pthread_self());
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">say_hello</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%lu] Hello\n"</span>, pthread_self());
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
        srand(time(<span class="hljs-literal">NULL</span>));

        ThreadPool* pool = new_ThreadPool();
        {
                <span class="hljs-keyword">char</span>* line = <span class="hljs-literal">NULL</span>;
                <span class="hljs-keyword">size_t</span> len = <span class="hljs-number">0</span>;
                <span class="hljs-keyword">ssize_t</span> read;
                <span class="hljs-keyword">while</span> ((read = getline(&amp;line, &amp;len, <span class="hljs-built_in">stdin</span>)) != <span class="hljs-number">-1</span>) {
                        <span class="hljs-keyword">if</span> (read&gt;<span class="hljs-number">2</span> &amp;&amp; <span class="hljs-built_in">strncmp</span>(<span class="hljs-string">"quit"</span>, line, read<span class="hljs-number">-1</span>) == <span class="hljs-number">0</span>) {
                                <span class="hljs-keyword">break</span>;
                        } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (read&gt;<span class="hljs-number">2</span> &amp;&amp; <span class="hljs-built_in">strncmp</span>(<span class="hljs-string">"hello"</span>, line, read<span class="hljs-number">-1</span>) == <span class="hljs-number">0</span>) {
                                say_hello();
                        } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (read&gt;<span class="hljs-number">2</span> &amp;&amp; <span class="hljs-built_in">strncmp</span>(<span class="hljs-string">"download"</span>, line, read<span class="hljs-number">-1</span>) == <span class="hljs-number">0</span>) {
                                <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i=<span class="hljs-number">0</span>; i&lt;<span class="hljs-number">5</span>; ++i) {
                                        ThreadPool_submit(pool, (Task){
                                                        .cb = download_img,
                                                        .data = <span class="hljs-literal">NULL</span>
                                                        });
                                }
                        }
                }
                <span class="hljs-keyword">if</span> (line != <span class="hljs-literal">NULL</span>) {
                        <span class="hljs-built_in">free</span>(line);
                }
        }
        delete_ThreadPool(pool);

        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<p>As there is no synchronization between the worker threads, their outputs can overlap. Multiple executions of the download command may therefore produce different outputs.</p>
<pre><code class="lang-bash">&gt;clang -g -Iinclude cmd/main.c src/** -o bin/main -lpthread
&gt;bin/main
download
[140364231157504] BEGIN download_img[140364247942912] BEGIN download_img[140364239550208] BEGIN download_img[140364256335616] BEGIN download_img1677218124 [140364256335616] END download_img
[140364256335616] BEGIN download_img1290723833 [140364239550208] END download_img
1311847477 [140364247942912] END download_img
1998703620 [140364231157504] END download_img
1256452635 [140364256335616] END download_img

download
[140364256335616] BEGIN download_img[140364239550208] BEGIN download_img[140364247942912] BEGIN download_img[140364231157504] BEGIN download_img649066008 448770898 1852008977 [140364256335616] END download_img[140364256335616] BEGIN download_img[140364247942912] END download_img
501791773 [140364239550208] END download_img
[140364231157504] END download_img
1564908724 [140364256335616] END download_img
</code></pre>
<p>In order to avoid race conditions in the output generated by asynchronous tasks, we need to use the next piece of our async/await puzzle, event loops. We'll cover that in the next part of this series.</p>
]]></content:encoded></item><item><title><![CDATA[C async/await - Part 1]]></title><description><![CDATA[Introduction
One of the things that grabbed my attention the most while I was having a first look at the features provided by the Zig language was the async/await keywords (even though they have been finally discarded).
const net = @import("std").net...]]></description><link>https://textmode.dev/c-async-await-part-1</link><guid isPermaLink="true">https://textmode.dev/c-async-await-part-1</guid><category><![CDATA[C]]></category><category><![CDATA[asynchronous]]></category><category><![CDATA[async]]></category><category><![CDATA[async/await]]></category><category><![CDATA[coroutines]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Fri, 26 Jan 2024 15:42:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706272865394/700b5fc3-ae52-47bd-888f-b626740c5ee4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>One of the things that grabbed my attention the most while I was having a first look at the features provided by the Zig language was the <a target="_blank" href="https://zig.guide/async/async-await">async/await</a> keywords (even though they have been finally discarded).</p>
<pre><code class="lang-c"><span class="hljs-keyword">const</span> net = @<span class="hljs-keyword">import</span>(<span class="hljs-string">"std"</span>).net;

pub <span class="hljs-keyword">const</span> io_mode = .evented;

<span class="hljs-function">pub fn <span class="hljs-title">main</span><span class="hljs-params">()</span> !<span class="hljs-keyword">void</span> </span>{
    <span class="hljs-keyword">const</span> addr = <span class="hljs-keyword">try</span> net.Address.parseIp(<span class="hljs-string">"127.0.0.1"</span>, <span class="hljs-number">7000</span>);

    var sendFrame = async send_message(addr);
    <span class="hljs-comment">// ... do something else while</span>
    <span class="hljs-comment">//     the message is being sent ...</span>
    <span class="hljs-keyword">try</span> await sendFrame;
}

<span class="hljs-comment">// Note how the function definition doesn't require any static</span>
<span class="hljs-comment">// `async` marking. The compiler can deduce when a function is</span>
<span class="hljs-comment">// async based on its usage of `await`.</span>
<span class="hljs-function">fn <span class="hljs-title">send_message</span><span class="hljs-params">(addr: net.Address)</span> !<span class="hljs-keyword">void</span> </span>{
    <span class="hljs-comment">// We could also delay `await`ing for the connection</span>
    <span class="hljs-comment">// to be established, if we had something else we</span>
    <span class="hljs-comment">// wanted to do in the meantime.</span>
    var socket = <span class="hljs-keyword">try</span> net.tcpConnectToAddress(addr);
    defer socket.close();

    <span class="hljs-comment">// Using both await and async in the same statement</span>
    <span class="hljs-comment">// is unnecessary and non-idiomatic, but it shows</span>
    <span class="hljs-comment">// what's happening behind the scenes when `io_mode`</span>
    <span class="hljs-comment">// is `.evented`.</span>
    _ = <span class="hljs-keyword">try</span> await async socket.write(<span class="hljs-string">"Hello World!\n"</span>);
}
</code></pre>
<p>I had already seen them in other languages like C#, Python, Dart, and Rust. But as they never felt too appealing to me for writing server-side code, I had not paid much attention to this programming pattern. For handling server requests I much prefer the goroutines introduced by Go, agreeing with what Loris Cro says in his excellent <a target="_blank" href="https://kristoff.it/blog/zig-colorblind-async-await/">article</a> about async/await. However, after digging a bit deeper into the subject, it became clear to me that this pattern is very convenient for writing client code: you can express the logic of your application in a much clearer way, sidestepping the infamous "<strong>callback hell</strong>" well known in the JavaScript world. Once I understood the pros of this feature, I wanted to know how it works under the hood. The first time you approach this pattern, it is not straightforward to understand, because it involves several complex concepts: coroutines, event loops, thread pools, and so on. So I decided to develop a toy implementation of async/await in C. As the language does not provide any of those concepts out of the box, building each of them from scratch will give you the knowledge required to truly understand how async/await works.</p>
<h1 id="heading-coroutines">Coroutines</h1>
<p>When you start looking for a definition of coroutines, you usually find resources that define them as "computer program components that generalize subroutines for non-preemptive multitasking by allowing execution to be suspended and resumed. They are also known as cooperative multitasking or cooperative threading." Easy, right? Coroutines are just <strong>resumable</strong> functions. Period. It just means that the execution of a coroutine can be suspended. At that point, the execution flow will come back to the caller function. And if that function resumes the coroutine, the execution flow will go back to the coroutine code where it was suspended.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706272090848/7cc01a58-7b0f-47da-b5d7-d241768d149a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706272777662/bbec8fe1-858e-4753-b490-ddda6ea58c2f.png" alt class="image--center mx-auto" /></p>
<p>How is this possible? When we execute a subroutine, we create a new <strong>function stack</strong> to store function parameters, local variables and result value. Once the function returns, its function stack is <strong>destroyed</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706100299085/868a6904-4631-4f68-a146-25e9cb940030.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706100313760/b33a3058-910e-4c15-bddc-5066c10fcaa1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706100326062/27b6ada2-7e1c-43c6-acad-dec1d6185039.png" alt class="image--center mx-auto" /></p>
<p>However, in the case of coroutines, as they can be suspended and resumed, we need to <strong>keep track</strong> of the caller and callee function stacks. That way we can switch between them when we suspend or resume the coroutine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706273002883/6bd8d5ba-8be6-4d06-bd41-e49e274a43da.png" alt class="image--center mx-auto" /></p>
<p>We can create a simple implementation of coroutines in C using the <strong>ucontext</strong> (user context) module. It provides you with the functions to create a user context (<strong>getcontext</strong> and <strong>makecontext</strong>), which includes the following information:</p>
<ul>
<li><p>the contents of the calling thread's machine registers</p>
</li>
<li><p>the signal mask</p>
</li>
<li><p>the current execution stack</p>
</li>
</ul>
<p>And the function to switch contexts (<strong>swapcontext</strong>). Keeping a reference to the caller user context and the callee user context on the coroutine object, we can switch back and forth between them when we suspend or resume the coroutine.</p>
<p>The suspend function of a coroutine is also referred to as <strong>yield</strong>, because it can produce an intermediate value that can be received on the caller function. So the coroutine can provide values to the caller function every time it suspends. But I prefer to stick to the suspend name, as it is clearer and it also matches the keyword used by the Zig language.</p>
<p>When we <strong>create</strong> a coroutine, we allocate the memory required for storing the context of the coroutine function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706271425472/20768b55-8af0-420d-8ecc-0252343e41bf.png" alt class="image--center mx-auto" /></p>
<p>The <strong>resume</strong> function will save the caller function context, and swap it with the context of the coroutine function. The execution flow will continue at the point of the coroutine function where it was suspended. If the coroutine function has not been suspended yet, the execution flow will start at the beginning of the coroutine function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706271450339/c532a386-e85b-45e4-b630-9df1e77585d1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706271464071/bf87b187-b12a-4ffb-b382-c48f0e3773fc.png" alt class="image--center mx-auto" /></p>
<p>Similarly, the <strong>suspend</strong> function will save the callee function (coroutine function) context, and swap it with the context of the caller function. The execution flow will continue at the point of the caller function where it resumed the coroutine function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706271490023/f6fc41db-a1e6-4f1f-92cb-73b00142d38a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706271499432/1baac7f2-91a7-4371-800a-3062f780f87e.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-implementation">Implementation</h1>
<p>Let's code a basic implementation of a coroutine that is able to generate integer values.</p>
<pre><code class="lang-c"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdio.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdlib.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdbool.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;ucontext.h&gt;</span></span>

<span class="hljs-keyword">typedef</span> <span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Coroutine</span> <span class="hljs-title">Coroutine</span>;</span>
<span class="hljs-function"><span class="hljs-keyword">typedef</span> <span class="hljs-title">int</span> <span class="hljs-params">(*CoroutineFn)</span><span class="hljs-params">(Coroutine*)</span></span>;

<span class="hljs-class"><span class="hljs-keyword">struct</span> <span class="hljs-title">Coroutine</span> {</span>
    CoroutineFn fn;
    <span class="hljs-keyword">ucontext_t</span>  caller_ctx;
    <span class="hljs-keyword">ucontext_t</span>  callee_ctx;
    <span class="hljs-keyword">int</span>         yield_value;
    <span class="hljs-keyword">bool</span>        finished;
};

<span class="hljs-function">Coroutine* <span class="hljs-title">new_Coroutine</span><span class="hljs-params">(CoroutineFn fn)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">Coroutine_resume</span><span class="hljs-params">(Coroutine* c)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">Coroutine_suspend</span><span class="hljs-params">(Coroutine* c, <span class="hljs-keyword">int</span> value)</span></span>;
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_Coroutine</span><span class="hljs-params">(Coroutine* c)</span></span>;

<span class="hljs-keyword">static</span>
<span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> default_stack_size = <span class="hljs-number">4096</span>;

<span class="hljs-function"><span class="hljs-keyword">static</span>
<span class="hljs-keyword">void</span> <span class="hljs-title">Coroutine_entry_point</span><span class="hljs-params">(Coroutine* c)</span> </span>{
    <span class="hljs-keyword">int</span> result = c-&gt;fn(c);
    c-&gt;finished = <span class="hljs-literal">true</span>;
    Coroutine_suspend(c, result);
}

<span class="hljs-function">Coroutine* <span class="hljs-title">new_Coroutine</span><span class="hljs-params">(CoroutineFn fn)</span> </span>{
    Coroutine* c = (Coroutine*)<span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, <span class="hljs-keyword">sizeof</span>(Coroutine));
    c-&gt;fn = fn;

    getcontext(&amp;c-&gt;callee_ctx);
    c-&gt;callee_ctx.uc_stack.ss_sp = <span class="hljs-built_in">calloc</span>(<span class="hljs-number">1</span>, default_stack_size);
    c-&gt;callee_ctx.uc_stack.ss_size = default_stack_size;
    c-&gt;callee_ctx.uc_link = <span class="hljs-number">0</span>;
    makecontext(&amp;c-&gt;callee_ctx, (<span class="hljs-keyword">void</span> (*)())Coroutine_entry_point, <span class="hljs-number">1</span>, c);

    <span class="hljs-keyword">return</span> c;
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">Coroutine_resume</span><span class="hljs-params">(Coroutine* c)</span> </span>{
    <span class="hljs-keyword">if</span> (c-&gt;finished) <span class="hljs-keyword">return</span> <span class="hljs-number">-1</span>;
    swapcontext(&amp;c-&gt;caller_ctx, &amp;c-&gt;callee_ctx);
    <span class="hljs-keyword">return</span> c-&gt;yield_value;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">Coroutine_suspend</span><span class="hljs-params">(Coroutine* c, <span class="hljs-keyword">int</span> value)</span> </span>{
    c-&gt;yield_value = value;
    swapcontext(&amp;c-&gt;callee_ctx, &amp;c-&gt;caller_ctx);
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">delete_Coroutine</span><span class="hljs-params">(Coroutine* c)</span> </span>{
    <span class="hljs-built_in">free</span>(c-&gt;callee_ctx.uc_stack.ss_sp);
    <span class="hljs-built_in">free</span>(c);
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">giveMeTwo</span><span class="hljs-params">(Coroutine* c)</span> </span>{
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%s] suspend 1\n"</span>, __func__);
        Coroutine_suspend(c, <span class="hljs-number">1</span>);
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%s] after suspend\n"</span>, __func__);
        <span class="hljs-keyword">return</span> <span class="hljs-number">2</span>;
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
        Coroutine* c = new_Coroutine(giveMeTwo);
        {
                <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%s] resume\n"</span>, __func__);
                <span class="hljs-keyword">int</span> a = Coroutine_resume(c);       <span class="hljs-comment">// a == 1</span>
                <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%s] after resume a: %d\n"</span>, __func__, a);
                <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%s] resume\n"</span>, __func__);
                <span class="hljs-keyword">int</span> b = Coroutine_resume(c);       <span class="hljs-comment">// b == 2</span>
                <span class="hljs-built_in">printf</span>(<span class="hljs-string">"[%s] after resume b: %d\n"</span>, __func__, b);
        }
        delete_Coroutine(c);
        <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<p>The output of this example is as follows:</p>
<pre><code class="lang-c">[main] resume
[giveMeTwo] suspend <span class="hljs-number">1</span>
[main] after resume a: <span class="hljs-number">1</span>
[main] resume
[giveMeTwo] after suspend
[main] after resume b: <span class="hljs-number">2</span>
</code></pre>
<p>So far so good. But we want to use coroutines to provide asynchronous functionality, which means we need to relate them to threads somehow. How can we mix this functionality with threads? We can use the suspend and resume calls as <strong>synchronization points</strong> between multiple threads communicating through thread-safe queues. Not bad, right? We will continue our investigation of how async/await works by implementing a basic thread pool in Part 2 of this series. Stay tuned!</p>
]]></content:encoded></item><item><title><![CDATA[Neovim setup for Zig]]></title><description><![CDATA[Zig is an open-source, statically-typed programming language designed with a focus on simplicity, performance, and safety. Created by Andrew Kelley, Zig aims to provide a modern alternative to existing programming languages, offering low-level contro...]]></description><link>https://textmode.dev/neovim-setup-for-zig</link><guid isPermaLink="true">https://textmode.dev/neovim-setup-for-zig</guid><category><![CDATA[zig]]></category><category><![CDATA[neovim]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Mon, 18 Dec 2023 13:09:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1702904906049/3398ef6f-54bb-4e3c-b7e1-beeee9b4f575.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Zig is an open-source, statically-typed programming language designed with a focus on simplicity, performance, and safety. Created by Andrew Kelley, Zig aims to provide a modern alternative to existing programming languages, offering low-level control without sacrificing developer convenience.</p>
<p>I first learned about Zig just a few months ago. However, with the recent surge of languages attempting to replace C/C++, such as Rust, Carbon, Nim, and others, I initially held a degree of skepticism toward yet another contender. It seemed like another attempt to replace the longstanding C/C++. Nevertheless, as I observed Zig gaining traction in the indie game development scene, my curiosity was piqued, prompting me to explore it further. The more I delve into its features, the more I find myself drawn to it. This growing affinity led me to take the step of setting up a development environment on Windows, eager to give Zig a chance and test how it feels to program in this language.</p>
<h1 id="heading-zig-installation">Zig installation</h1>
<p>First, we are going to install Zig manually by downloading a release for Windows from the official website:</p>
<ol>
<li><p>Go to <a target="_blank" href="https://ziglang.org/download/">Download ⚡ Zig Programming Language (ziglang.org)</a></p>
</li>
<li><p>Download the release for Windows (e.g.: zig-windows-x86_64-0.12.0-dev.1819+5c1428ea9.zip)</p>
</li>
<li><p>Unzip the file in the folder that you want (e.g.: C:\Users\&lt;username&gt;\zig)</p>
</li>
</ol>
<p>After that, we need to add the path of the unzipped folder (i.e.: C:\Users\&lt;username&gt;\zig\zig-windows-x86_64-0.12.0-dev.1819+5c1428ea9) to the <strong>PATH</strong> environment variable, so that we can run Zig from any location.</p>
<h2 id="heading-test-installation">Test installation</h2>
<p>Now that we have Zig installed, we can check if it works properly. The <strong>zig init</strong> command sets up the basic structure and configuration files needed for a Zig project. Open a Terminal and type the following commands:</p>
<pre><code class="lang-powershell">mkdir zig_hello_world
<span class="hljs-built_in">cd</span> zig_hello_world
zig init
</code></pre>
<p>This command creates an example program containing a basic main function that prints a message, and a unit test.</p>
<pre><code class="lang-c"><span class="hljs-keyword">const</span> <span class="hljs-built_in">std</span> = @<span class="hljs-keyword">import</span>(<span class="hljs-string">"std"</span>);

<span class="hljs-function">pub fn <span class="hljs-title">main</span><span class="hljs-params">()</span> !<span class="hljs-keyword">void</span> </span>{
    <span class="hljs-comment">// Prints to stderr (it's a shortcut based on `std.io.getStdErr()`)</span>
    <span class="hljs-built_in">std</span>.debug.print(<span class="hljs-string">"All your {s} are belong to us.\n"</span>, .{<span class="hljs-string">"codebase"</span>});

    <span class="hljs-comment">// stdout is for the actual output of your application, for example if you</span>
    <span class="hljs-comment">// are implementing gzip, then only the compressed bytes should be sent to</span>
    <span class="hljs-comment">// stdout, not any debugging messages.</span>
    <span class="hljs-keyword">const</span> stdout_file = <span class="hljs-built_in">std</span>.io.getStdOut().writer();
    var bw = <span class="hljs-built_in">std</span>.io.bufferedWriter(stdout_file);
    <span class="hljs-keyword">const</span> <span class="hljs-built_in">stdout</span> = bw.writer();

    <span class="hljs-keyword">try</span> <span class="hljs-built_in">stdout</span>.print(<span class="hljs-string">"Run `zig build test` to run the tests.\n"</span>, .{});

    <span class="hljs-keyword">try</span> bw.flush(); <span class="hljs-comment">// don't forget to flush!</span>
}

test <span class="hljs-string">"simple test"</span> {
    var <span class="hljs-built_in">list</span> = <span class="hljs-built_in">std</span>.ArrayList(i32).init(<span class="hljs-built_in">std</span>.testing.allocator);
    defer <span class="hljs-built_in">list</span>.deinit(); <span class="hljs-comment">// try commenting this out and see if zig detects the memory leak!</span>
    <span class="hljs-keyword">try</span> <span class="hljs-built_in">list</span>.append(<span class="hljs-number">42</span>);
    <span class="hljs-keyword">try</span> <span class="hljs-built_in">std</span>.testing.expectEqual(@as(i32, <span class="hljs-number">42</span>), <span class="hljs-built_in">list</span>.pop());
}
</code></pre>
<p>We can execute the main function by running the command <strong>zig build run</strong>:</p>
<pre><code class="lang-powershell">zig build run
All your codebase are belong to us.
Run `zig build test` to run the tests.
</code></pre>
<p>To run the unit test, we just have to run the command <strong>zig build test</strong>, which prints nothing when the test passes. We can check that it actually works by modifying the expected value in the unit test and watching it fail.</p>
<pre><code class="lang-powershell">zig build test
</code></pre>
<h1 id="heading-neovim">Neovim</h1>
<h2 id="heading-language-server">Language Server</h2>
<p>To install the Zig Language Server we can download the Windows version from the following website:</p>
<ul>
<li><a target="_blank" href="https://github.com/zigtools/zls/wiki/Installation#install-zls">Zig Language Server</a></li>
</ul>
<p>After downloading it, we have to copy the <strong>zls.exe</strong> file to the <strong>path</strong> where we have installed Zig (i.e.: C:\Users\&lt;username&gt;\zig\zig-windows-x86_64-0.12.0-dev.1819+5c1428ea9).</p>
<p>Then, we have to configure the language server in our Neovim configuration file. I prefer to use a local configuration file (i.e.: <strong>.nvim.lua</strong>), instead of the global Neovim configuration files as I explain in my <a target="_blank" href="https://t.co/CSEJEmUrU2">book about Neovim</a>.</p>
<p><a target="_blank" href="https://www.amazon.com/dp/B0CCW8PGKV"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765790582088/ca3a4ab5-125b-4383-a8ec-3c0b524d23ff.png" alt class="image--center mx-auto" /></a></p>
<p>There I also explain how to configure the main actions of <strong>lspconfig</strong> in Neovim.</p>
<pre><code class="lang-c">require <span class="hljs-string">'lspconfig'</span>.zls.setup{}
</code></pre>
<p>We can check that it works properly by opening the <strong>main.zig</strong> file and displaying the details of a function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702853862760/93b59dba-339b-479e-9ccd-f53ffe02d868.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-debugger">Debugger</h2>
<h3 id="heading-llvm">LLVM</h3>
<p>To debug Zig code we will use <strong>LLDB</strong>, which is included in the LLVM installation. We can download the LLVM version for Windows (e.g.: LLVM-16.0.6-win64.exe) from the following website:</p>
<ul>
<li><a target="_blank" href="https://github.com/llvm/llvm-project/releases">LLVM</a></li>
</ul>
<h3 id="heading-python">Python</h3>
<p>Once we have LLVM installed, we need to install the Python version required by that LLVM version. We can check which one by running LLDB from a terminal: it will show an error dialog complaining about the missing Python DLL (e.g.: python310.dll not found). For the LLVM-16.0.6 version, we need to install Python version 3.10.x (e.g.: python-3.10.10-amd64.exe). We can download it from the website:</p>
<ul>
<li><a target="_blank" href="https://www.python.org/downloads/windows/">Python</a></li>
</ul>
<p>Finally, we need to define the following <strong>environment variable</strong>:</p>
<pre><code class="lang-powershell">LLDB_USE_NATIVE_PDB_READER=<span class="hljs-string">"yes"</span>
</code></pre>
<p>After that, we can run <strong>lldb</strong> from a terminal and debug the executable produced by the example program:</p>
<pre><code class="lang-c">PS C:\Users\&lt;username&gt;\zig_hello_world&gt; lldb.exe .\zig-out\bin\zig_hello_world.exe
(lldb) target create <span class="hljs-string">".\\zig-out\\bin\\zig_hello_world.exe"</span>
Current executable <span class="hljs-built_in">set</span> to <span class="hljs-string">'C:\Users\&lt;username&gt;\zig_hello_world\zig-out\bin\zig_hello_world.exe'</span> (x86_64).
(lldb) b main
Breakpoint <span class="hljs-number">1</span>: where = zig_hello_world.exe`main + <span class="hljs-number">26</span> at main.zig:<span class="hljs-number">5</span>, address = <span class="hljs-number">0x00000001400013fa</span>
(lldb) r
(lldb) Process <span class="hljs-number">28096</span> launched: <span class="hljs-string">'C:\Users\&lt;username&gt;\zig_hello_world\zig-out\bin\zig_hello_world.exe'</span> (x86_64)
Process <span class="hljs-number">28096</span> stopped
* thread #<span class="hljs-number">1</span>, stop reason = breakpoint <span class="hljs-number">1.1</span>
    frame #<span class="hljs-number">0</span>: <span class="hljs-number">0x00007ff7d46b13fa</span> zig_hello_world.exe`main at main.zig:<span class="hljs-number">5</span>
   <span class="hljs-number">2</span>
   <span class="hljs-number">3</span>    pub fn main() !<span class="hljs-keyword">void</span> {
   <span class="hljs-number">4</span>        <span class="hljs-comment">// Prints to stderr (it's a shortcut based on `std.io.getStdErr()`)</span>
-&gt; <span class="hljs-number">5</span>        <span class="hljs-built_in">std</span>.debug.print(<span class="hljs-string">"All your {s} are belong to us.\n"</span>, .{<span class="hljs-string">"codebase"</span>});
   <span class="hljs-number">6</span>
   <span class="hljs-number">7</span>        <span class="hljs-comment">// stdout is for the actual output of your application, for example if you</span>
   <span class="hljs-number">8</span>        <span class="hljs-comment">// are implementing gzip, then only the compressed bytes should be sent to</span>
(lldb)
</code></pre>
<p>As Neovim supports the Debug Adapter Protocol (<strong>DAP</strong>), we can debug directly from Neovim. To configure Zig debugging in Neovim we can use the <strong>nvim-dap</strong> plugin. The required configuration for Zig that we need to add to our local Neovim configuration files (i.e.: <strong>.nvim.lua</strong>) is the following:</p>
<pre><code class="lang-c">local dap = require(<span class="hljs-string">'dap'</span>)
dap.adapters.lldb = {
  type = <span class="hljs-string">'executable'</span>,
  command = <span class="hljs-string">'C:\\Program Files\\LLVM\\bin\\lldb-vscode.exe'</span>, -- adjust as needed, must be absolute path
  name = <span class="hljs-string">'lldb'</span>
}

dap.configurations.zig = {
  {
    name = <span class="hljs-string">'Launch'</span>,
    type = <span class="hljs-string">'lldb'</span>,
    request = <span class="hljs-string">'launch'</span>,
    program = <span class="hljs-string">'${workspaceFolder}/zig-out/bin/zig_hello_world.exe'</span>,
    cwd = <span class="hljs-string">'${workspaceFolder}'</span>,
    stopOnEntry = <span class="hljs-literal">false</span>,
    args = {},
  },
}
</code></pre>
<p>Then, you can start a debugging session from Neovim, and move around the source code while inspecting the execution flow of the program.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702903975549/61004b79-2d3d-46ab-89d1-572e35ff1374.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-bonus-vscode-setup">Bonus: VScode setup</h1>
<p>If you don't feel comfortable working with a text editor like Neovim, I'll give you the instructions to configure VScode to work with Zig, as it is one of the most commonly used IDEs nowadays. To add support for Zig, you just need to install the <strong>Zig extension</strong> from the VScode marketplace.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702852119229/feaee2be-0aab-4253-af3f-ae1934b867a3.png" alt class="image--center mx-auto" /></p>
<p>Now that we have the Zig extension installed, we can open the folder of the example project that we created previously (i.e.: zig_hello_world) from VScode. When we open the <strong>main.zig</strong> file, VScode will ask us to install <strong>Zig</strong> and the <strong>Zig Language Server</strong>. It will install them under the <strong>local</strong> VScode directory (e.g.: c:\Users\&lt;username&gt;\AppData\Roaming\Code\User\globalStorage\ziglang.vscode-zig\zig_install\zig.exe).</p>
<p>Once we have the Zig Language Server installed, we can test it by opening the <strong>main.zig</strong> file and placing the cursor over one of the functions (e.g.: std.debug.print). It will pop up a dialog showing details about the selected function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702852734571/8c89a5bb-9340-4974-8ad4-538edb81a4d8.png" alt class="image--center mx-auto" /></p>
<p>Lastly, we can configure the <strong>debugger</strong> for Zig by creating the following <strong>launch.json</strong> file:</p>
<pre><code class="lang-json">{
    <span class="hljs-comment">// Use IntelliSense to learn about possible attributes.</span>
    <span class="hljs-comment">// Hover to view descriptions of existing attributes.</span>
    <span class="hljs-comment">// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387</span>
    <span class="hljs-attr">"version"</span>: <span class="hljs-string">"0.2.0"</span>,
    <span class="hljs-attr">"configurations"</span>: [
        {
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Debug"</span>,
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"cppvsdbg"</span>,
            <span class="hljs-attr">"request"</span>: <span class="hljs-string">"launch"</span>,
            <span class="hljs-attr">"program"</span>: <span class="hljs-string">"${workspaceFolder}/zig-out/bin/zig_hello_world.exe"</span>,
            <span class="hljs-attr">"cwd"</span>: <span class="hljs-string">"${workspaceFolder}"</span>,
        },
    ]
}
</code></pre>
<p>After that, if we place a breakpoint on one of the lines of the <strong>main.zig</strong> file and press <strong>F5</strong> to start a debugging session, we will see how the program execution stops at the breakpoint:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702852966590/1186fc92-49fb-436b-b99c-abbebc5219ec.png" alt class="image--center mx-auto" /></p>
<p>And that's all. I hope this post helps you set up a development environment for Zig so that you can give this language a try. It is promising, especially for game development, where I've already seen several indie games and engines implemented in Zig. On Linux, the setup is very similar: just pick the Linux version of each product, and the steps remain essentially the same.</p>
<p>See you in the next post!</p>
]]></content:encoded></item><item><title><![CDATA[How to implement functional programming principles in C]]></title><description><![CDATA[Haskell is often considered a paradigmatic functional programming language for several reasons:

Purity and Immutability:

Haskell is a purely functional language, which means that functions in Haskell are referentially transparent and side-effect-fr...]]></description><link>https://textmode.dev/how-to-implement-functional-programming-principles-in-c</link><guid isPermaLink="true">https://textmode.dev/how-to-implement-functional-programming-principles-in-c</guid><category><![CDATA[C]]></category><category><![CDATA[Haskell]]></category><category><![CDATA[Functional Programming]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Thu, 30 Nov 2023 11:20:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1701343102270/00bb1945-096d-4443-b745-9a5f2c0bbda7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Haskell is often considered a paradigmatic functional programming language for several reasons:</p>
<ol>
<li><p><strong>Purity and Immutability:</strong></p>
<ul>
<li><p>Haskell is a purely functional language, which means that functions in Haskell are referentially transparent and side-effect-free. This purity ensures that the result of a function depends only on its inputs, promoting clarity and reasoning about code.</p>
</li>
<li><p>Immutability is enforced by default in Haskell. Once a value is bound to a name, it cannot be changed. This immutability simplifies reasoning about program state and helps avoid bugs related to mutable data.</p>
</li>
</ul>
</li>
<li><p><strong>Lazy Evaluation:</strong></p>
<ul>
<li><p>Haskell employs lazy evaluation, where expressions are not evaluated until their values are actually needed. This enables the creation of infinite data structures and allows for more modular and composable code.</p>
</li>
<li><p>Lazy evaluation helps avoid unnecessary computations, leading to potentially more efficient and expressive programs.</p>
</li>
</ul>
</li>
<li><p><strong>Type System and Type Inference:</strong></p>
<ul>
<li><p>Haskell has a strong, static type system that is based on Hindley-Milner type inference. The type system helps catch errors at compile time, providing a high level of safety.</p>
</li>
<li><p>Haskell's type system also supports polymorphism, type classes, and algebraic data types, allowing for expressive and concise code.</p>
</li>
</ul>
</li>
<li><p><strong>Higher-Order Functions and First-Class Functions:</strong></p>
<ul>
<li><p>Haskell treats functions as first-class citizens, meaning functions can be passed as arguments to other functions, returned as values, and stored in data structures. This enables the use of higher-order functions, facilitating functional programming patterns.</p>
</li>
<li><p>Higher-order functions in Haskell promote code that is concise, modular, and expressive.</p>
</li>
</ul>
</li>
<li><p><strong>Pattern Matching and Algebraic Data Types:</strong></p>
<ul>
<li><p>Pattern matching in Haskell allows for concise and readable code when working with algebraic data types. This feature makes it easy to destructure and process complex data structures.</p>
</li>
<li><p>Algebraic data types, including sum types (enums) and product types (structs), provide a powerful mechanism for modeling data in a clear and extensible way.</p>
</li>
</ul>
</li>
<li><p><strong>Monads and Monadic I/O:</strong></p>
<ul>
<li><p>Haskell introduced the concept of monads to handle side effects in a pure functional setting. Monads allow sequencing of computations while maintaining referential transparency.</p>
</li>
<li><p>The use of monads in Haskell enables a clean and principled approach to handling input/output operations in a functional language.</p>
</li>
</ul>
</li>
<li><p><strong>Expressive Type Classes:</strong></p>
<ul>
<li><p>Haskell's type classes allow ad-hoc polymorphism, enabling the definition of common operations for various types. This leads to more generic and reusable code.</p>
</li>
<li><p>The use of type classes in Haskell is a powerful mechanism for creating abstract interfaces and promoting code that is both expressive and modular.</p>
</li>
</ul>
</li>
</ol>
<p>The combination of these features makes Haskell a paradigmatic functional programming language. It serves as a reference for functional programming principles and has influenced the development of other functional languages. Haskell's design choices encourage developers to adopt a functional programming mindset, leading to code that is often concise, elegant, and maintainable.</p>
<p>Here's an example that demonstrates the principles of recursion, higher-order functions, immutability, and referential transparency:</p>
<pre><code class="lang-haskell"><span class="hljs-comment">-- Example of recursion</span>
<span class="hljs-title">factorial</span> :: <span class="hljs-type">Integer</span> -&gt; <span class="hljs-type">Integer</span>
<span class="hljs-title">factorial</span> <span class="hljs-number">0</span> = <span class="hljs-number">1</span>
<span class="hljs-title">factorial</span> n = n * factorial (n - <span class="hljs-number">1</span>)

<span class="hljs-comment">-- Example of a higher-order function (map)</span>
<span class="hljs-title">multiplyByTwo</span> :: [<span class="hljs-type">Integer</span>] -&gt; [<span class="hljs-type">Integer</span>]
<span class="hljs-title">multiplyByTwo</span> = map (*<span class="hljs-number">2</span>)

<span class="hljs-comment">-- Example of immutability</span>
<span class="hljs-title">originalList</span> :: [<span class="hljs-type">Integer</span>]
<span class="hljs-title">originalList</span> = [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>, <span class="hljs-number">5</span>]

<span class="hljs-comment">-- Applying a higher-order function to the original list</span>
<span class="hljs-title">doubledList</span> :: [<span class="hljs-type">Integer</span>]
<span class="hljs-title">doubledList</span> = multiplyByTwo originalList

<span class="hljs-comment">-- Referentially transparent function</span>
<span class="hljs-title">addTwo</span> :: <span class="hljs-type">Integer</span> -&gt; <span class="hljs-type">Integer</span>
<span class="hljs-title">addTwo</span> x = x + <span class="hljs-number">2</span>

<span class="hljs-title">main</span> :: <span class="hljs-type">IO</span> ()
<span class="hljs-title">main</span> = <span class="hljs-keyword">do</span>
  <span class="hljs-comment">-- Example of immutability</span>
  <span class="hljs-keyword">let</span> originalValue = <span class="hljs-number">5</span>
  putStrLn $ <span class="hljs-string">"Original value: "</span> ++ show originalValue

  <span class="hljs-comment">-- Referentially transparent function</span>
  <span class="hljs-keyword">let</span> plusTwoValue = addTwo originalValue
  putStrLn $ <span class="hljs-string">"Value plus two: "</span> ++ show plusTwoValue

  <span class="hljs-comment">-- Example of recursion</span>
  <span class="hljs-keyword">let</span> result = factorial <span class="hljs-number">5</span>
  putStrLn $ <span class="hljs-string">"Factorial of 5: "</span> ++ show result

  <span class="hljs-comment">-- Example of a higher-order function (map)</span>
  putStrLn $ <span class="hljs-string">"Original list: "</span> ++ show originalList
  putStrLn $ <span class="hljs-string">"Doubled list: "</span> ++ show doubledList
</code></pre>
<p>In this Haskell example:</p>
<ul>
<li><p><strong>Recursion:</strong> The <code>factorial</code> function calculates the factorial of a number using recursion.</p>
</li>
<li><p><strong>Higher-order function (</strong><code>map</code><strong>):</strong> The <code>multiplyByTwo</code> function is defined using the <code>map</code> higher-order function, which multiplies each element of a list by 2.</p>
</li>
<li><p><strong>Immutability:</strong> In Haskell, variables are immutable. The <code>originalList</code> and <code>doubledList</code> values are bound to names, and once assigned, their values do not change.</p>
</li>
<li><p><strong>Referential Transparency:</strong> The <code>addTwo</code> function is referentially transparent. Given the same input, it will always produce the same output.</p>
</li>
</ul>
<p>Note: Haskell uses a lazy evaluation strategy, which means that values are only computed when needed. This can have a profound impact on how functions are evaluated and how recursion is handled.</p>
<p>In C, achieving full functional programming principles such as immutability and referential transparency can be challenging due to the mutable nature of variables. However, you can still demonstrate functional programming concepts like recursion and higher-order functions. Here's a simple example that showcases recursion and a basic form of a higher-order function in C:</p>
<pre><code class="lang-c"><span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;stdio.h&gt;</span></span>

<span class="hljs-comment">// Example of a higher-order function</span>
<span class="hljs-function"><span class="hljs-keyword">typedef</span> <span class="hljs-title">int</span> <span class="hljs-params">(*UnaryOperation)</span><span class="hljs-params">(<span class="hljs-keyword">int</span>)</span></span>;

<span class="hljs-comment">// Higher-order function that applies a unary operation function to each element of an array</span>
<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">map</span><span class="hljs-params">(<span class="hljs-keyword">int</span>* <span class="hljs-built_in">array</span>, <span class="hljs-keyword">size_t</span> size, UnaryOperation op)</span> </span>{
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">size_t</span> i = <span class="hljs-number">0</span>; i &lt; size; ++i) {
        <span class="hljs-built_in">array</span>[i] = op(<span class="hljs-built_in">array</span>[i]);
    }
}

<span class="hljs-comment">// Example of a recursive function</span>
<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">factorial</span><span class="hljs-params">(<span class="hljs-keyword">int</span> n)</span> </span>{
    <span class="hljs-keyword">if</span> (n == <span class="hljs-number">0</span> || n == <span class="hljs-number">1</span>) {
        <span class="hljs-keyword">return</span> <span class="hljs-number">1</span>;
    } <span class="hljs-keyword">else</span> {
        <span class="hljs-keyword">return</span> n * factorial(n - <span class="hljs-number">1</span>);
    }
}

<span class="hljs-comment">// Unary operation used with map (defined at file scope: nested functions</span>
<span class="hljs-comment">// are a GCC extension, not standard C)</span>
<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">doubleOperation</span><span class="hljs-params">(<span class="hljs-keyword">int</span> x)</span> </span>{
    <span class="hljs-keyword">return</span> x * <span class="hljs-number">2</span>;
}

<span class="hljs-function"><span class="hljs-keyword">int</span> <span class="hljs-title">main</span><span class="hljs-params">()</span> </span>{
    <span class="hljs-comment">// Example of immutability (const) --------------------------------</span>
    <span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> originalValue = <span class="hljs-number">5</span>;
    <span class="hljs-built_in">printf</span>(<span class="hljs-string">"Original value: %d\n"</span>, originalValue);

    <span class="hljs-comment">// Example of referential transparency (pure function) ------------</span>
    <span class="hljs-keyword">int</span> doubledValue = originalValue * <span class="hljs-number">2</span>;
    <span class="hljs-built_in">printf</span>(<span class="hljs-string">"Doubled value: %d\n"</span>, doubledValue);

    <span class="hljs-comment">// Example of recursion -------------------------------------------</span>
    <span class="hljs-keyword">int</span> result = factorial(<span class="hljs-number">5</span>);
    <span class="hljs-built_in">printf</span>(<span class="hljs-string">"Factorial of 5: %d\n"</span>, result);

    <span class="hljs-comment">// Example of a higher-order function (map) -----------------------</span>
    <span class="hljs-keyword">int</span> numbers[] = {<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>, <span class="hljs-number">5</span>};
    <span class="hljs-keyword">size_t</span> arraySize = <span class="hljs-keyword">sizeof</span>(numbers) / <span class="hljs-keyword">sizeof</span>(numbers[<span class="hljs-number">0</span>]);

    <span class="hljs-comment">// Apply the double operation using the map function</span>
    <span class="hljs-built_in">map</span>(numbers, arraySize, doubleOperation);

    <span class="hljs-comment">// Print the modified array</span>
    <span class="hljs-built_in">printf</span>(<span class="hljs-string">"Doubled array: "</span>);
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">size_t</span> i = <span class="hljs-number">0</span>; i &lt; arraySize; ++i) {
        <span class="hljs-built_in">printf</span>(<span class="hljs-string">"%d "</span>, numbers[i]);
    }
    <span class="hljs-built_in">printf</span>(<span class="hljs-string">"\n"</span>);

    <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>;
}
</code></pre>
<p>In this example:</p>
<ul>
<li><p><strong>Immutability:</strong> The <code>const</code> keyword is used to declare a constant (<code>originalValue</code>), demonstrating a form of immutability. However, note that C does not enforce immutability in the same way as functional languages.</p>
</li>
<li><p><strong>Referential Transparency:</strong> The expression <code>originalValue * 2</code> is referentially transparent, meaning it will always produce the same result for the same inputs.</p>
</li>
<li><p><strong>Recursion:</strong> The <code>factorial</code> function is a recursive function that calculates the factorial of a number.</p>
</li>
<li><p><strong>Higher-order function (</strong><code>map</code><strong>):</strong> The <code>map</code> function takes an array, its size, and a unary operation function (<code>op</code>). It applies the unary operation to each element of the array, demonstrating a basic form of a higher-order function.</p>
</li>
</ul>
<p>While this example incorporates some functional programming principles, it's important to note that C is not a purely functional language, and achieving true immutability and referential transparency might require a different programming paradigm.</p>
]]></content:encoded></item><item><title><![CDATA[How to use the Windows Search Index from PowerShell]]></title><description><![CDATA[Windows Index Search, also known as Windows Search or Windows Indexing, is a feature in the Microsoft Windows operating system that improves the speed and efficiency of file searches on your computer. It is designed to create and maintain an index of...]]></description><link>https://textmode.dev/how-to-use-the-windows-search-index-from-powershell</link><guid isPermaLink="true">https://textmode.dev/how-to-use-the-windows-search-index-from-powershell</guid><category><![CDATA[Powershell]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sun, 19 Nov 2023 22:56:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1699883686688/312df96c-3787-47e3-a5cc-eb378f3e3499.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Windows Index Search, also known as Windows Search or Windows Indexing, is a feature in the Microsoft Windows operating system that improves the speed and efficiency of file searches on your computer. It is designed to create and maintain an index of the files and folders stored on your local drives, allowing for faster and more accurate searches.</p>
<p>When you perform a search using Windows Index Search, the system does not have to scan every file and folder on your computer every time you search. Instead, it refers to the index, which is a database containing information about the files' names, locations, and contents. This significantly reduces the time it takes to find and retrieve search results, especially when dealing with a large number of files.</p>
<p>Key Features and Benefits of Windows Index Search:</p>
<ul>
<li><p><strong>Fast Search Results</strong>: With the index pre-built, Windows Search can quickly return search results, making it easier to find the files and documents you need.</p>
</li>
<li><p><strong>Support for Various File Types</strong>: Windows Index Search supports various file types, including documents, images, music, videos, and more.</p>
</li>
<li><p><strong>Partial and Phrasal Search</strong>: It allows partial word matches and supports searching for phrases, making it more flexible and powerful.</p>
</li>
<li><p><strong>Outlook Integration</strong>: The Windows Search index can also be used by Microsoft Outlook to speed up email searches.</p>
</li>
<li><p><strong>Customization</strong>: Users can customize which folders and locations are indexed to include or exclude specific locations based on their preferences.</p>
</li>
<li><p><strong>Advanced Search Operators</strong>: Windows Search supports advanced search operators, such as AND, OR, and NOT, to refine search queries further.</p>
</li>
<li><p><strong>Real-Time Indexing</strong>: As files and folders change or new ones are added, the index is updated in real-time to ensure the search results remain up-to-date.</p>
</li>
</ul>
<h2 id="heading-how-to-set-the-index-sources">How to set the index sources</h2>
<p>Windows Index Search can look for text in multiple locations; you specify which folders to add to its database. To edit the directories included in the index, follow these steps:</p>
<ul>
<li><p>Press Win + S (the Windows key plus the S key) to open the Windows Search panel</p>
</li>
<li><p>Type Indexing Options to open the corresponding dialog</p>
</li>
</ul>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhU8eqwJSTwxKZPkQoB_g4HBSwgVj_q5TqkP58TKcuSQ-9cn-haiJlEzn3V1-8DEVZLp3jzHlU2JGVbM6RwrComOiH7O9t9INLq-jz-m10siV-hLyBOWzyIXb1yNjWoP91o5ehL1uso6-O4myuna10TsswjIVFCRRaBE2VCL_3LjRrbc21F_2z0qmmUQpO8/s16000/Windows%20Search%20pop-up.png" alt="Windows Search pop-up" class="image--center mx-auto" /></p>
<ul>
<li>Add the folders that you want to include in the index</li>
</ul>
<p>Once you have configured the folders to be included in the database, the index engine starts indexing all the documents in those locations. When it finishes, it shows a message saying that the indexing process is complete. From then on, you can run full-text searches on the specified directories.</p>
<h2 id="heading-how-to-do-full-text-searches-on-the-windows-explorer">How to do full-text searches on the Windows Explorer</h2>
<p>The Windows Explorer provides you with a search text box that allows you to look for files or directories. To do a full-text search using Windows Explorer you have to do the following steps:</p>
<ul>
<li><p>Open the Windows Explorer window</p>
</li>
<li><p>Navigate to the folder where you want to perform the search</p>
</li>
<li><p>Type in the search text box the term that you are looking for, specifying the content filter (e.g.: content: "design analysis")</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699883790487/fc238f87-502c-4d5d-9a65-05da2249bd8b.png" alt class="image--center mx-auto" /></p>
<p>As the contents of all the documents contained in that folder have been already indexed by the Windows Search Index, you will see how the results of documents containing that term (if any) start to emerge immediately.</p>
<h2 id="heading-how-to-do-full-text-searches-on-powershell">How to do full-text searches on PowerShell</h2>
<p>We are now going to see how we can use that functionality from a PowerShell script. It will use the APIs exposed by Windows to have access to the Windows Search Index database.</p>
<pre><code class="lang-powershell"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">SearchIndex</span></span> {
<span class="hljs-comment">&lt;#
<span class="hljs-doctag">.PARAMETER Path</span>
Absolute or relative path. Has to be in the Search Index for results to be presented.
<span class="hljs-doctag">.PARAMETER Pattern</span>
File name or pattern to search for. Defaults to *.*.
<span class="hljs-doctag">.PARAMETER Text</span>
Free text to search for in the files defined by the pattern.
<span class="hljs-doctag">.PARAMETER Recurse</span>
Perform a recursive search. Enabled by default; pass -Recurse:$false to disable it.
<span class="hljs-doctag">.PARAMETER AsFSInfo</span>
Add the parameter to return System.IO.FileSystemInfo objects instead of String objects.
<span class="hljs-doctag">.SYNOPSIS</span>
Uses the Windows Search index to search for files.
<span class="hljs-doctag">.DESCRIPTION</span>
Uses the Windows Search index to search for files. SQL Syntax documented at https://msdn.microsoft.com/en-us/library/windows/desktop/bb231256(v=vs.85).aspx Based on https://blogs.msdn.microsoft.com/mediaandmicrocode/2008/07/13/microcode-windows-powershell-windows-desktop-search-problem-solving/ 
<span class="hljs-doctag">.OUTPUTS</span>
By default one string per file found with full path.
If the AsFSInfo switch is set, one System.IO.FileSystemInfo object per file found is returned.
#&gt;</span>
    <span class="hljs-function">[<span class="hljs-type">CmdletBinding</span>()]</span>
    <span class="hljs-keyword">param</span> (  
        [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$Path</span> = <span class="hljs-variable">$PWD</span>,
        [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$Pattern</span> = <span class="hljs-string">"*.*"</span>,
        [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$Text</span> = <span class="hljs-variable">$null</span>,
        [<span class="hljs-type">switch</span>]<span class="hljs-variable">$Recurse</span> = <span class="hljs-variable">$true</span>,
        [<span class="hljs-type">switch</span>]<span class="hljs-variable">$AsFSInfo</span> = <span class="hljs-variable">$false</span>
    )

    <span class="hljs-variable">$Path</span> = (<span class="hljs-built_in">Resolve-Path</span> <span class="hljs-literal">-Path</span> <span class="hljs-variable">$Path</span>).Path

    <span class="hljs-variable">$Pattern</span> = <span class="hljs-variable">$Pattern</span> <span class="hljs-operator">-replace</span> <span class="hljs-string">"\*"</span>, <span class="hljs-string">"%"</span>
    <span class="hljs-variable">$Path</span> = <span class="hljs-variable">$Path</span>.Replace(<span class="hljs-string">'\'</span>,<span class="hljs-string">'/'</span>)

    <span class="hljs-keyword">if</span> ((<span class="hljs-built_in">Test-Path</span> <span class="hljs-literal">-Path</span> Variable:fsSearchCon) <span class="hljs-operator">-eq</span> <span class="hljs-variable">$false</span>)
    {
        <span class="hljs-variable">$global:fsSearchCon</span> = <span class="hljs-built_in">New-Object</span> <span class="hljs-literal">-ComObject</span> ADODB.Connection
        <span class="hljs-variable">$global:fsSearchRs</span> = <span class="hljs-built_in">New-Object</span> <span class="hljs-literal">-ComObject</span> ADODB.Recordset
    }

    <span class="hljs-variable">$fsSearchCon</span>.Open(<span class="hljs-string">"Provider=Search.CollatorDSO;Extended Properties='Application=Windows';"</span>)

    [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$queryString</span> = <span class="hljs-string">"SELECT System.ItemPathDisplay FROM SYSTEMINDEX WHERE System.FileName LIKE '"</span> + <span class="hljs-variable">$Pattern</span> + <span class="hljs-string">"' "</span>
    <span class="hljs-keyword">if</span> ([<span class="hljs-type">System.String</span>]::IsNullOrEmpty(<span class="hljs-variable">$Text</span>) <span class="hljs-operator">-eq</span> <span class="hljs-variable">$false</span>){
        <span class="hljs-variable">$queryString</span> += <span class="hljs-string">"AND System.Search.Contents='"</span> + <span class="hljs-variable">$Text</span> + <span class="hljs-string">"' "</span>
    }

    <span class="hljs-keyword">if</span> (<span class="hljs-variable">$Recurse</span>){
        <span class="hljs-variable">$queryString</span> += <span class="hljs-string">"AND SCOPE='file:"</span> + <span class="hljs-variable">$Path</span> + <span class="hljs-string">"' ORDER BY System.ItemPathDisplay "</span>
    }
    <span class="hljs-keyword">else</span> {
        <span class="hljs-variable">$queryString</span> += <span class="hljs-string">"AND DIRECTORY='file:"</span> + <span class="hljs-variable">$Path</span> + <span class="hljs-string">"' ORDER BY System.ItemPathDisplay "</span>
    }

    <span class="hljs-variable">$fsSearchRs</span>.Open(<span class="hljs-variable">$queryString</span>, <span class="hljs-variable">$fsSearchCon</span>)

    <span class="hljs-keyword">While</span>(<span class="hljs-operator">-Not</span> <span class="hljs-variable">$fsSearchRs</span>.EOF){
        <span class="hljs-keyword">if</span> (<span class="hljs-variable">$AsFSInfo</span>){
            [<span class="hljs-type">System.IO.FileSystemInfo</span>]<span class="hljs-variable">$</span>(<span class="hljs-built_in">Get-Item</span> <span class="hljs-literal">-LiteralPath</span> (<span class="hljs-variable">$fsSearchRs</span>.Fields.Item(<span class="hljs-string">"System.ItemPathDisplay"</span>).Value) <span class="hljs-literal">-Force</span>)
        }
        <span class="hljs-keyword">else</span> {
            <span class="hljs-variable">$fsSearchRs</span>.Fields.Item(<span class="hljs-string">"System.ItemPathDisplay"</span>).Value
        }
        <span class="hljs-variable">$fsSearchRs</span>.MoveNext()
    }
    <span class="hljs-variable">$fsSearchRs</span>.Close()
    <span class="hljs-variable">$fsSearchCon</span>.Close()
}

SearchIndex @args
</code></pre>
<p>You can copy this script to a file named <strong>SearchIndex.ps1</strong>, which you can edit to adapt any part that you want. And if Neovim is your editor of choice and you’d like a clear, practical introduction to it, I cover that in detail in my <a target="_blank" href="https://www.amazon.com/dp/B0CCW8PGKV">book</a>.</p>
<p><a target="_blank" href="https://www.amazon.com/dp/B0CCW8PGKV"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765790582088/ca3a4ab5-125b-4383-a8ec-3c0b524d23ff.png" alt class="image--center mx-auto" /></a></p>
<p>As long as this file is stored in one of the directories listed in the Path environment variable, you can run it from any location in a PowerShell terminal. The script supports several arguments. By default it searches recursively through the current directory and matches all file types, but the -Pattern argument lets you restrict the results to files with a specific extension.</p>
<pre><code class="lang-bash">PS C:\&gt;SearchIndex.ps1 -Text <span class="hljs-string">'design analysis'</span> -Pattern <span class="hljs-string">'*.pdf'</span>
</code></pre>
<h2 id="heading-how-to-do-full-text-searches-from-the-windows-run-dialog">How to do full-text searches from the Windows run dialog</h2>
<p>Windows has a very useful feature that lets you quickly execute a command. Press the Windows key plus the R key and a pop-up dialog called Run appears. There you can type the name of any program available on the Path environment variable and execute it.</p>
<p>In our case, we want to trigger a full-text search over any document in the Windows Search Index database from that Run dialog. That way, we just need to press Win + R and type the search command together with the search term to run a full-text search over all the target documents on our machine. Not bad, right?</p>
<p>The problem is that the Run dialog cannot execute PowerShell scripts directly, only executables and <strong>batch</strong> files. To overcome that limitation, we create a wrapper batch file that calls our PowerShell script, and add the location of this wrapper batch file to our Path environment variable.</p>
<p>We are going to create a <strong>search-text.bat</strong> file, that will allow us to perform a full-text search over any pdf file contained in a specific directory, which belongs to the Windows Search Index database.</p>
<pre><code class="lang-bash">@<span class="hljs-built_in">echo</span> off

Rem Description: Search text <span class="hljs-keyword">in</span> Windows Search Index pdf files
Rem Usage: search-text.bat {strings}
Rem Example: search-text.bat Seamless

<span class="hljs-built_in">set</span> cmd=<span class="hljs-string">"SearchIndex.ps1 -Path 'C:\Users\alem\Documents' -Pattern *.pdf -Text '%*' -Recurse"</span>
<span class="hljs-built_in">echo</span> %cmd%

PowerShell -ExecutionPolicy Bypass -Command %cmd%
Pause
</code></pre>
<p>Once we have done that, we can use the Run dialog to execute the SearchIndex script to search for all the PDF documents from that directory that have been indexed in the Windows Search Index database.</p>
<p><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioj2tlXBoeuIDa9ixPrfHRKLpuJNnmRYOmhyHv29No5AoB175QwxoKsddgl2vVNa4iU9aRzqfOU-m9TANcoPyrIzUxPI-Lh9SJ38zZ5UBxrdW1WpLNcJ4qOlGclKy9rgiXBPHZHFmfjfu4STuIBHFOOJJn_b4YhQm66jM--JHg_np2OnNIoAhzBV2EQfUV/s16000/Run%20dialog.png" alt="Run dialog" class="image--center mx-auto" /></p>
<p>And that's all. With this utility, you can search for any text at lightning speed on your computer. It is so fast that it almost feels like you have a Google search engine on your local laptop.</p>
]]></content:encoded></item><item><title><![CDATA[How to write a full duplex server in Go]]></title><description><![CDATA[In the previous article How to write a concurrent TCP server in Go we saw how to implement a concurrent TCP server in Go. This time we are going to see how to take the server to the next level and allow it to broadcast messages to its clients. That w...]]></description><link>https://textmode.dev/how-to-write-a-full-duplex-server-in-go</link><guid isPermaLink="true">https://textmode.dev/how-to-write-a-full-duplex-server-in-go</guid><category><![CDATA[Go Language]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sun, 05 Nov 2023 12:36:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1699186032597/200c4217-2630-4c36-957c-704a8daf80a9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the previous article <a target="_blank" href="https://terminalprogrammer.hashnode.dev/how-to-write-a-concurrent-tcp-server-in-go">How to write a concurrent TCP server in Go</a> we saw how to implement a concurrent TCP server in Go. This time we are going to see how to take the server to the next level and allow it to broadcast messages to its clients. That way, we can start a communication from either end of the connection. Thus, ending up in a full duplex scenario.</p>
<p>We are going to implement a mechanism that allows the server to send messages to all of its clients at the same time. If we wanted to send messages to a specific client, it would just be a matter of keeping track of an ID for each client, associated with their corresponding connection, and using that mapping to address messages to that client.</p>
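<p>The per-client idea can be sketched independently of the networking code: key a <code>sync.Map</code> by a client ID instead of the remote address, and look the ID up before queuing the message. The <code>registry</code>, <code>register</code>, and <code>sendTo</code> names below are hypothetical, not part of the server implemented in this article:</p>
<pre><code class="lang-go">package main

import (
	"fmt"
	"sync"
)

// registry maps a client ID to that client's outgoing message channel,
// mirroring how the server below maps remote addresses to connections.
type registry struct {
	clients sync.Map // map[string]chan string
}

// register associates an ID with a buffered outgoing channel,
// just like the connection's responses channel.
func (r *registry) register(id string) chan string {
	ch := make(chan string, 256)
	r.clients.Store(id, ch)
	return ch
}

// sendTo queues a message for one specific client and reports
// whether the ID was known.
func (r *registry) sendTo(id, msg string) bool {
	v, ok := r.clients.Load(id)
	if !ok {
		return false
	}
	v.(chan string) &lt;- msg
	return true
}

func main() {
	var r registry
	alice := r.register("alice")
	r.register("bob")

	r.sendTo("alice", "hello alice")
	fmt.Println(&lt;-alice) // only alice receives the message
}
</code></pre>
<p>In the real server, <code>sendTo</code> would end by calling the connection's <code>send</code> method instead of writing to a bare channel.</p>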
<h2 id="heading-client-connection">Client connection</h2>
<p>We will start by modeling each client connection with a reference to the server, a reference to the actual network connection and a buffered channel of 256 strings. This channel will be used to store responses as a buffer before sending them back to the client. That way, we can detach the process of sending messages back to the clients from the process of handling their requests.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">type</span> connection <span class="hljs-keyword">struct</span> {
    s         *server
    conn      net.Conn
    responses <span class="hljs-keyword">chan</span> <span class="hljs-keyword">string</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">newConnection</span><span class="hljs-params">(s *server, conn net.Conn)</span> *<span class="hljs-title">connection</span></span> {
    <span class="hljs-keyword">var</span> c connection
    c.s = s
    c.conn = conn
    c.responses = <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> <span class="hljs-keyword">string</span>, <span class="hljs-number">256</span>)
    <span class="hljs-keyword">return</span> &amp;c
}
</code></pre>
<p>The client connection processing will be composed of two goroutines. One of them will be in charge of reading requests from the client, and sending them to the server. Each client request will be a text line.</p>
<pre><code class="lang-go">
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *connection)</span> <span class="hljs-title">readConnection</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">defer</span> c.s.removeConnection(c)

    buf := bufio.NewReader(c.conn)

    <span class="hljs-keyword">for</span> {
        data, err := buf.ReadString(<span class="hljs-string">'\n'</span>)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">break</span>
        }
        c.s.submitRequest(c, data)
    }
}
</code></pre>
<p>The other goroutine will iterate over the responses channel, and every time a new response is added to the channel, this goroutine will send it to the client through the network connection. When there are no responses ready to be sent back to the client, this goroutine blocks until a new one arrives on the channel.</p>
<pre><code class="lang-go">
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *connection)</span> <span class="hljs-title">writeConnection</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">for</span> message := <span class="hljs-keyword">range</span> c.responses {
        c.conn.Write([]<span class="hljs-keyword">byte</span>(message))
    }
}
</code></pre>
<h2 id="heading-server">Server</h2>
<p>The server object will keep the client connections in a <code>sync.Map</code>. This is needed because the list of connections can be accessed from different goroutines at the same time: the ones that add and remove references to the map, and the ones that broadcast responses to the clients. Similarly to the client connection, the server will contain a buffered channel of 256 requests, so it can queue requests coming from the clients without blocking any of them. It will also contain a handle function, supplied by the user, that processes those requests.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">type</span> server <span class="hljs-keyword">struct</span> {
    connections sync.Map
    handle      handleFn
    requests    <span class="hljs-keyword">chan</span> *request
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">newServer</span><span class="hljs-params">(handle handleFn)</span> *<span class="hljs-title">server</span></span> {
    <span class="hljs-keyword">var</span> s server
    s.handle = handle
    s.requests = <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> *request, <span class="hljs-number">256</span>)
    <span class="hljs-keyword">return</span> &amp;s
}
</code></pre>
<p>The server processing will be composed of a goroutine that accepts new network connections and stores them in the <code>sync.Map</code>.</p>
<pre><code class="lang-go">
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">serve</span><span class="hljs-params">(network, address <span class="hljs-keyword">string</span>)</span></span> {
    l, err := net.Listen(network, address)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(err)
    }
    <span class="hljs-keyword">defer</span> l.Close()

    <span class="hljs-keyword">for</span> {
        c, err := l.Accept()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Fatal(err)
        }
        connection := newConnection(s, c)
        s.connections.Store(c.RemoteAddr().String(), connection)
        connection.start()

        log.Printf(<span class="hljs-string">"New connection %s"</span>, c.RemoteAddr().String())
    }
}
</code></pre>
<p>It will also contain several goroutines that handle client requests. This number of goroutines is the same as the number of CPUs available in the server machine. Each one of these worker goroutines will execute the handle function of the server to process the requests.</p>
<pre><code class="lang-go">
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">start</span><span class="hljs-params">(network, address <span class="hljs-keyword">string</span>)</span></span> {
    <span class="hljs-keyword">go</span> s.serve(network, address)

    numCpu := runtime.NumCPU()
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; numCpu; i++ {
        <span class="hljs-keyword">go</span> s.worker()
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">worker</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">for</span> req := <span class="hljs-keyword">range</span> s.requests {
        s.handle(req.c, req.data)
    }
}
</code></pre>
<p>The server will also expose a method to broadcast messages to the clients. This method will iterate over the client connections <code>sync.Map</code> and send the message to each client. The message will then be queued in the responses channel of each client connection.</p>
<pre><code class="lang-go">
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">broadcast</span><span class="hljs-params">(message <span class="hljs-keyword">string</span>)</span></span> {
    s.connections.Range(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(k, v <span class="hljs-keyword">interface</span>{})</span> <span class="hljs-title">bool</span></span> {
        c := v.(*connection)
        c.send(message)
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>
    })
}
</code></pre>
<h2 id="heading-complete-code">Complete code</h2>
<p>After going over the main parts of the example, here you have the complete code of the server.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bufio"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"runtime"</span>
    <span class="hljs-string">"sync"</span>
)

<span class="hljs-keyword">type</span> connection <span class="hljs-keyword">struct</span> {
    s         *server
    conn      net.Conn
    responses <span class="hljs-keyword">chan</span> <span class="hljs-keyword">string</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">newConnection</span><span class="hljs-params">(s *server, conn net.Conn)</span> *<span class="hljs-title">connection</span></span> {
    <span class="hljs-keyword">var</span> c connection
    c.s = s
    c.conn = conn
    c.responses = <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> <span class="hljs-keyword">string</span>, <span class="hljs-number">256</span>)
    <span class="hljs-keyword">return</span> &amp;c
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *connection)</span> <span class="hljs-title">start</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">go</span> c.readConnection()
    <span class="hljs-keyword">go</span> c.writeConnection()
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *connection)</span> <span class="hljs-title">stop</span><span class="hljs-params">()</span></span> {
    <span class="hljs-built_in">close</span>(c.responses)
    c.conn.Close()
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *connection)</span> <span class="hljs-title">send</span><span class="hljs-params">(data <span class="hljs-keyword">string</span>)</span></span> {
    c.responses &lt;- data
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *connection)</span> <span class="hljs-title">readConnection</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">defer</span> c.s.removeConnection(c)

    buf := bufio.NewReader(c.conn)

    <span class="hljs-keyword">for</span> {
        data, err := buf.ReadString(<span class="hljs-string">'\n'</span>)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">break</span>
        }
        c.s.submitRequest(c, data)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(c *connection)</span> <span class="hljs-title">writeConnection</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">for</span> message := <span class="hljs-keyword">range</span> c.responses {
        c.conn.Write([]<span class="hljs-keyword">byte</span>(message))
    }
}

<span class="hljs-keyword">type</span> request <span class="hljs-keyword">struct</span> {
    c    *connection
    data <span class="hljs-keyword">string</span>
}

<span class="hljs-keyword">type</span> handleFn <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(*connection, <span class="hljs-keyword">string</span>)</span></span>

<span class="hljs-keyword">type</span> server <span class="hljs-keyword">struct</span> {
    connections sync.Map
    handle      handleFn
    requests    <span class="hljs-keyword">chan</span> *request
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">newServer</span><span class="hljs-params">(handle handleFn)</span> *<span class="hljs-title">server</span></span> {
    <span class="hljs-keyword">var</span> s server
    s.handle = handle
    s.requests = <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> *request, <span class="hljs-number">256</span>)
    <span class="hljs-keyword">return</span> &amp;s
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">submitRequest</span><span class="hljs-params">(c *connection, data <span class="hljs-keyword">string</span>)</span></span> {
    req := request{c, data}
    s.requests &lt;- &amp;req
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">start</span><span class="hljs-params">(network, address <span class="hljs-keyword">string</span>)</span></span> {
    <span class="hljs-keyword">go</span> s.serve(network, address)

    numCpu := runtime.NumCPU()
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; numCpu; i++ {
        <span class="hljs-keyword">go</span> s.worker()
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">worker</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">for</span> req := <span class="hljs-keyword">range</span> s.requests {
        s.handle(req.c, req.data)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">serve</span><span class="hljs-params">(network, address <span class="hljs-keyword">string</span>)</span></span> {
    l, err := net.Listen(network, address)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(err)
    }
    <span class="hljs-keyword">defer</span> l.Close()

    <span class="hljs-keyword">for</span> {
        c, err := l.Accept()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Fatal(err)
        }
        connection := newConnection(s, c)
        s.connections.Store(c.RemoteAddr().String(), connection)
        connection.start()

        log.Printf(<span class="hljs-string">"New connection %s"</span>, c.RemoteAddr().String())
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">removeConnection</span><span class="hljs-params">(c *connection)</span></span> {
    c.stop()
    s.connections.Delete(c.conn.RemoteAddr().String())

    log.Printf(<span class="hljs-string">"Closed connection %s"</span>, c.conn.RemoteAddr().String())
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">broadcast</span><span class="hljs-params">(message <span class="hljs-keyword">string</span>)</span></span> {
    s.connections.Range(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(k, v <span class="hljs-keyword">interface</span>{})</span> <span class="hljs-title">bool</span></span> {
        c := v.(*connection)
        c.send(message)
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>
    })
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    arguments := os.Args
    <span class="hljs-keyword">if</span> <span class="hljs-built_in">len</span>(arguments) != <span class="hljs-number">3</span> {
        log.Fatal(<span class="hljs-string">"Usage: server &lt;network&gt; &lt;address&gt;"</span>)
    }

    network := arguments[<span class="hljs-number">1</span>]
    address := <span class="hljs-string">":"</span> + arguments[<span class="hljs-number">2</span>]

    s := newServer(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(c *connection, request <span class="hljs-keyword">string</span>)</span></span> {
        c.send(request)
    })
    s.start(network, address)

    fmt.Print(<span class="hljs-string">"Enter message: \n"</span>)
    reader := bufio.NewReader(os.Stdin)
    <span class="hljs-keyword">for</span> {
        text, _ := reader.ReadString(<span class="hljs-string">'\n'</span>)
        s.broadcast(text)
    }
}
</code></pre>
<h2 id="heading-testing-the-example">Testing the example</h2>
<p>We can test the server by running it in a terminal and specifying the network type for the sockets, which can be TCP or UNIX.</p>
<h3 id="heading-tcp-sockets-example">TCP sockets example</h3>
<p>We start the server in one terminal, and test it from another terminal using netcat (<strong>nc</strong>).</p>
<pre><code class="lang-bash">
<span class="hljs-comment"># TCP Server</span>
./server tcp &lt;port&gt;

<span class="hljs-comment"># TCP Client</span>
nc localhost &lt;port&gt;
</code></pre>
<p>As soon as we type any line on the netcat terminal, we will see an echo message coming back from the server.</p>
<pre><code class="lang-bash">
&gt;nc localhost 8080
test1
test1
test2
test2
</code></pre>
<p>From the server terminal we can also broadcast messages to the connected clients. If we type a line in the server terminal, we will see it appear in the client terminal.</p>
<pre><code class="lang-bash">
&gt;./server tcp 8080
Enter message:
2023/08/11 12:32:50 New connection 127.0.0.1:34498
message1
</code></pre>
<pre><code class="lang-bash">
&gt;nc localhost 8080
test1
test1
test2
test2
message1
</code></pre>
<h3 id="heading-unix-domain-sockets-example">UNIX Domain Sockets example</h3>
<p>The process for UNIX domain sockets is very similar: we just start the server in one terminal using unix as the network type.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># UNIX Server</span>
./server unix &lt;socket&gt;
<span class="hljs-comment"># example</span>
./server unix <span class="hljs-built_in">test</span>

<span class="hljs-comment"># UNIX Client</span>
nc -U &lt;socket&gt;
<span class="hljs-comment"># example:</span>
nc -U :<span class="hljs-built_in">test</span>
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this article, we have seen an example of how to code a simple full duplex server that can be used with TCP or UNIX Domain Sockets. Unlike half-duplex connections, where data can only flow in one direction at a time, a full duplex connection allows for seamless and real-time exchange of information in both directions concurrently. This capability enhances the efficiency and speed of communication, making it ideal for applications such as video conferencing, online gaming, and data-intensive tasks.</p>
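<p>A standalone sketch of this full duplex behavior, using an in-memory <code>net.Pipe</code> from the standard library (not part of the server above), shows both ends writing and reading concurrently:</p>

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// fullDuplexDemo connects two in-memory endpoints with net.Pipe and has
// each side write its own message while reading the peer's, concurrently.
func fullDuplexDemo() (string, string) {
	a, b := net.Pipe()
	defer a.Close()
	defer b.Close()

	// net.Pipe is synchronous: each write blocks until the other side
	// reads it, so the writes must run in their own goroutines.
	go fmt.Fprintln(a, "from A")
	go fmt.Fprintln(b, "from B")

	fromB, _ := bufio.NewReader(a).ReadString('\n') // a receives b's message
	fromA, _ := bufio.NewReader(b).ReadString('\n') // b receives a's message
	return fromB, fromA
}

func main() {
	fromB, fromA := fullDuplexDemo()
	fmt.Print(fromB, fromA)
}
```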
]]></content:encoded></item><item><title><![CDATA[How to write a concurrent TCP server in Go]]></title><description><![CDATA[In the world of networking and server development, concurrency is a fundamental aspect that enables handling multiple clients simultaneously, leading to better performance and responsiveness. Go, with its built-in concurrency features, provides an ex...]]></description><link>https://textmode.dev/how-to-write-a-concurrent-tcp-server-in-go</link><guid isPermaLink="true">https://textmode.dev/how-to-write-a-concurrent-tcp-server-in-go</guid><category><![CDATA[Go Language]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sun, 29 Oct 2023 19:31:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698607790406/8340f724-05e9-4b20-8a0f-9c76314e9fdf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of networking and server development, concurrency is a fundamental aspect that enables handling <strong>multiple</strong> clients simultaneously, leading to better performance and responsiveness. Go, with its built-in concurrency features, provides an excellent platform for building efficient TCP servers. In this article, we'll explore the process of creating a TCP concurrent server in Go, taking advantage of Goroutines to handle multiple client connections concurrently.</p>
<h2 id="heading-understanding-tcp-concurrent-servers">Understanding TCP Concurrent Servers</h2>
<p>A <strong>TCP server</strong> is a fundamental component of network programming that listens for incoming connections from clients and responds to their requests. It operates on the Transmission Control Protocol (<strong>TCP</strong>), a reliable and connection-oriented protocol that guarantees the delivery of data packets in the order they were sent. The TCP server binds to a specific <strong>IP</strong> address and <strong>port</strong> number, waiting for clients to establish connections. Once a connection is established, the server can exchange data with the client <strong>bidirectionally</strong>, allowing for real-time communication. TCP servers are widely used in various applications, such as web servers, chat applications, file transfer systems, and more, providing a robust foundation for building scalable and efficient network services.</p>
<p>A <strong>concurrent TCP server</strong> is an advanced networking application that combines the power of concurrent programming with the capabilities of a TCP server. Unlike traditional TCP servers, a concurrent TCP server can handle <strong>multiple</strong> client connections concurrently, allowing it to process multiple requests simultaneously without blocking other clients. This is achieved by utilizing concurrency primitives like Goroutines in Go or Threads in other programming languages. By employing parallelism, a concurrent TCP server can efficiently serve a large number of clients, enhancing its responsiveness and scalability. This makes it a preferred choice for high-traffic and real-time applications, such as chat servers, multiplayer games, or any system requiring simultaneous interactions with multiple clients. The use of concurrency in TCP servers significantly improves performance and resource utilization, making it a crucial aspect of modern network programming.</p>
<h2 id="heading-the-power-of-goroutines">The Power of Goroutines</h2>
<p>Goroutines are a key feature of the Go programming language, and they are the building blocks of concurrent programming in Go. A Goroutine is a <strong>lightweight</strong>, independent execution unit that allows developers to execute functions concurrently without the complexities of traditional threads. Goroutines are highly efficient and can be created and destroyed with minimal overhead. They enable concurrent execution of tasks, allowing multiple operations to run simultaneously without the need for explicit thread management. Due to their lightweight nature, Goroutines make it easy to scale applications and handle a large number of concurrent operations efficiently. With Goroutines, Go programmers can write concurrent code with ease, enabling the creation of highly responsive and efficient applications. They are an essential feature for building highly scalable and responsive servers.</p>
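<p>As a minimal illustration of the paragraph above (the function and inputs are made up for the example), here is how goroutines fan out work and a <code>sync.WaitGroup</code> waits for all of them to finish:</p>

```go
package main

import (
	"fmt"
	"sync"
)

// doubleAll doubles every input in its own goroutine and waits for all
// of them with a WaitGroup. Each goroutine writes to its own slice index,
// so no extra synchronization is needed for the results.
func doubleAll(inputs []int) []int {
	results := make([]int, len(inputs))
	var wg sync.WaitGroup
	for i, n := range inputs {
		wg.Add(1)
		go func(i, n int) {
			defer wg.Done() // signal completion of this goroutine
			results[i] = n * 2
		}(i, n)
	}
	wg.Wait() // block until every goroutine has called Done
	return results
}

func main() {
	fmt.Println(doubleAll([]int{1, 2, 3, 4}))
}
```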
<h2 id="heading-implementing-the-tcp-concurrent-server">Implementing the TCP Concurrent Server</h2>
<p>Let's create a simple TCP server that handles multiple client connections concurrently using Goroutines. It will create a <strong>goroutine</strong> for each new incoming TCP connection. That goroutine will run an endless loop that simply returns an echo of each received message to the client, unless the message is the string "STOP", in which case the goroutine for that connection will end and the server will stop serving that client.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bufio"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net"</span>
    <span class="hljs-string">"strings"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    l, err := net.Listen(<span class="hljs-string">"tcp"</span>, <span class="hljs-string">":8080"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(err)
    }
    <span class="hljs-keyword">defer</span> l.Close()

    <span class="hljs-keyword">for</span> {
        c, err := l.Accept()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Fatal(err)
        }
        <span class="hljs-keyword">go</span> handleConnection(c)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleConnection</span><span class="hljs-params">(c net.Conn)</span></span> {
    <span class="hljs-keyword">defer</span> c.Close()

    connReader := bufio.NewReader(c)

    log.Printf(<span class="hljs-string">"Serving %s\n"</span>, c.RemoteAddr().String())
    <span class="hljs-keyword">for</span> {
        data, err := connReader.ReadString(<span class="hljs-string">'\n'</span>)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(err)
            <span class="hljs-keyword">break</span>
        }
        request := strings.TrimSpace(<span class="hljs-keyword">string</span>(data))

        log.Printf(<span class="hljs-string">"Request from %s: %s\n"</span>, c.RemoteAddr().String(), request)
        <span class="hljs-keyword">if</span> request == <span class="hljs-string">"STOP"</span> {
            <span class="hljs-keyword">break</span>
        }
        c.Write([]<span class="hljs-keyword">byte</span>(data))
    }
    log.Printf(<span class="hljs-string">"Stop serving %s\n"</span>, c.RemoteAddr().String())
}
</code></pre>
<p>Once we have our server ready, we can test it using <strong>netcat</strong> in Linux. To do so, we just need to start the server in one terminal and run the netcat command in another, using the address <a target="_blank" href="http://localhost">localhost</a> and port 8080 to connect to the server. Then we can start typing strings, and every time we press Enter we will see an echo of that message coming back from the server.</p>
<pre><code class="lang-bash">
&gt;nc localhost 8080
test1
test1
test2
test2
</code></pre>
<p>Meanwhile, in the server terminal we will see the messages received from the client. We can connect multiple clients from multiple terminals, running netcat in each one of them, to check how the server handles all their requests.</p>
<pre><code class="lang-bash">
&gt;./main
2023/08/02 10:13:35 Serving 127.0.0.1:58740
2023/08/02 10:13:38 Request from 127.0.0.1:58740: test1
2023/08/02 10:13:38 Request from 127.0.0.1:58740: test2
2023/08/02 10:13:51 Serving 127.0.0.1:58742
2023/08/02 10:13:57 Request from 127.0.0.1:58742: test3
2023/08/02 10:13:58 Request from 127.0.0.1:58742: test4
2023/08/02 10:14:01 Request from 127.0.0.1:58742: STOP
2023/08/02 10:14:01 Stop serving 127.0.0.1:58742
2023/08/02 10:14:04 Request from 127.0.0.1:58740: test5
2023/08/02 10:14:05 Request from 127.0.0.1:58740: test6
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this article, we explored the process of building a TCP concurrent server in Go, utilizing Goroutines to handle multiple client connections simultaneously. By leveraging the power of concurrency through Goroutines, our server can efficiently handle a large number of clients, leading to improved performance and responsiveness. With the knowledge gained from this example, you can further extend the server to incorporate your own business logic and create powerful network applications. Go's built-in concurrency features make it an ideal choice for developing high-performance servers, and mastering the art of concurrent programming opens up a world of possibilities for creating scalable and efficient applications.</p>
]]></content:encoded></item><item><title><![CDATA[How to use the jsonrpc codec with websockets in Go]]></title><description><![CDATA[WebSockets and JSON-RPC are powerful technologies that can significantly enhance real-time communication and data exchange in web applications. WebSockets provide full-duplex communication channels over a single TCP connection, while JSON-RPC is a li...]]></description><link>https://textmode.dev/how-to-use-the-jsonrpc-codec-with-websockets-in-go</link><guid isPermaLink="true">https://textmode.dev/how-to-use-the-jsonrpc-codec-with-websockets-in-go</guid><category><![CDATA[Go Language]]></category><category><![CDATA[json]]></category><category><![CDATA[jsonrpc]]></category><category><![CDATA[websockets]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sun, 15 Oct 2023 19:11:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1696328771956/ff1a7c77-17b6-4530-abbb-3809273aaddc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>WebSockets and JSON-RPC are powerful technologies that can significantly enhance real-time communication and data exchange in web applications. WebSockets provide full-duplex communication channels over a single TCP connection, while JSON-RPC is a lightweight remote procedure call protocol based on JSON. In this article, we'll explore how to leverage these technologies together to implement an efficient and scalable server in Go.</p>
<h2 id="heading-understanding-json-rpc">Understanding JSON-RPC</h2>
<p>JSON-RPC is a simple yet effective protocol for remote procedure calls over HTTP or other transport layers. It allows clients to invoke methods on a server and receive responses in a structured JSON format. The protocol is lightweight, language-agnostic, and easy to implement, making it an excellent choice for web applications.</p>
<p>We are going to use Go's standard codec from <strong>net/rpc/jsonrpc</strong>, which implements <strong>JSON-RPC 1.0</strong>. Its requests wrap the call argument in a one-element params array and follow this format:</p>
<pre><code class="lang-bash">
     {<span class="hljs-string">"method"</span>:<span class="hljs-string">"HelloService.Hello"</span>,<span class="hljs-string">"params"</span>:[{<span class="hljs-string">"Name"</span>:<span class="hljs-string">"John"</span>}],<span class="hljs-string">"id"</span>:0}
</code></pre>
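<p>For reference, the client codec in <code>net/rpc/jsonrpc</code> wraps the call argument in a one-element <code>params</code> array. This sketch reproduces that wire shape with plain <code>encoding/json</code> (the struct below mirrors the codec's internal request type; the method name and parameter are illustrative):</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clientRequest mirrors the wire shape produced by net/rpc/jsonrpc's
// client codec: the argument is wrapped in a one-element params array.
type clientRequest struct {
	Method string         `json:"method"`
	Params [1]interface{} `json:"params"`
	Id     uint64         `json:"id"`
}

// encodeRequest builds the JSON body for one call.
func encodeRequest(method string, param interface{}, id uint64) string {
	req := clientRequest{Method: method, Id: id}
	req.Params[0] = param
	b, _ := json.Marshal(req)
	return string(b)
}

func main() {
	fmt.Println(encodeRequest("HelloService.Hello", map[string]string{"Name": "John"}, 0))
}
```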
<h2 id="heading-using-websockets-in-go">Using WebSockets in Go</h2>
<p>WebSockets enable bidirectional communication between clients and servers, allowing real-time data transfer. In Go, WebSocket support is available through the <a target="_blank" href="http://golang.org/x/net/websocket"><strong>golang.org/x/net/websocket</strong></a> package, maintained by the Go team as a supplement to the standard library. There are more advanced packages for WebSockets, like <strong>gorilla/websocket</strong>, but I'll stick to this one for the example, as it is simpler.</p>
<h2 id="heading-implementing-a-json-rpc-server-using-websockets-in-go">Implementing a JSON-RPC Server using WebSockets in Go</h2>
<p>For this example, let's create a simple JSON-RPC server in Go that communicates with clients over WebSockets.</p>
<h3 id="heading-step-1-define-the-service">Step 1: Define the service</h3>
<p>Make sure you have Go installed and set up a Go workspace. Create a new directory for the project, and inside it, create the Go file named <strong>internal/service/hello.go</strong> containing the definition of the service that the server is going to expose through a WebSockets interface.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> service

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"log"</span>
)

<span class="hljs-keyword">type</span> HelloService <span class="hljs-keyword">struct</span>{}

<span class="hljs-keyword">type</span> HelloRequest <span class="hljs-keyword">struct</span> {
    Name <span class="hljs-keyword">string</span>
}

<span class="hljs-keyword">type</span> HelloResponse <span class="hljs-keyword">struct</span> {
    Greeting <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"greeting"`</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *HelloService)</span> <span class="hljs-title">Hello</span><span class="hljs-params">(req *HelloRequest, res *HelloResponse)</span> <span class="hljs-title">error</span></span> {
    log.Println(<span class="hljs-string">"Execute method: HelloService.Hello()"</span>)
    res.Greeting = <span class="hljs-string">"Hello: "</span> + req.Name
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
}
</code></pre>
<h3 id="heading-step-2-implementing-the-json-rpc-server">Step 2: Implementing the JSON-RPC Server</h3>
<p>Then we create the <strong>cmd/server/main.go</strong> file to set up a WebSocket server and handle incoming JSON-RPC requests.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bytes"</span>
    <span class="hljs-string">"golang.org/x/net/websocket"</span>
    <span class="hljs-string">"io"</span>
    <span class="hljs-string">"io/ioutil"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"net/rpc"</span>
    <span class="hljs-string">"net/rpc/jsonrpc"</span>

    <span class="hljs-string">"go-websocket-jsonrpc/internal/service"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">wsHandleRequest</span><span class="hljs-params">(ws *websocket.Conn)</span></span> {
    <span class="hljs-keyword">for</span> {
        <span class="hljs-keyword">var</span> req []<span class="hljs-keyword">byte</span>
        err := websocket.Message.Receive(ws, &amp;req)
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"ReadMessage:"</span>, err)
            <span class="hljs-keyword">return</span>
        }

        log.Println(<span class="hljs-string">"ServeRequest..."</span>)
        <span class="hljs-keyword">var</span> res bytes.Buffer
        err = rpc.ServeRequest(jsonrpc.NewServerCodec(<span class="hljs-keyword">struct</span> {
            io.ReadCloser
            io.Writer
        }{
            ioutil.NopCloser(bytes.NewReader(req)),
            &amp;res,
        }))
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"ServeRequest:"</span>, err)
            <span class="hljs-keyword">return</span>
        }

        err = websocket.Message.Send(ws, res.Bytes())
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Println(<span class="hljs-string">"WriteMessage:"</span>, err)
            <span class="hljs-keyword">return</span>
        }
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    log.Println(<span class="hljs-string">"Starting http server"</span>)

    rpc.Register(&amp;service.HelloService{})

    http.Handle(<span class="hljs-string">"/ws"</span>, websocket.Handler(wsHandleRequest))
    log.Fatal(http.ListenAndServe(<span class="hljs-string">"localhost:8080"</span>, <span class="hljs-literal">nil</span>))
}
</code></pre>
<h3 id="heading-step-3-implementing-the-json-rpc-client">Step 3: Implementing the JSON-RPC Client</h3>
<p>Finally, we create the <strong>cmd/client/main.go</strong> file to implement a Go WebSocket client that reads strings from the command line, sends them to the server, and prints out the responses.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bufio"</span>
    <span class="hljs-string">"golang.org/x/net/websocket"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net/rpc"</span>
    <span class="hljs-string">"net/rpc/jsonrpc"</span>
    <span class="hljs-string">"os"</span>

    <span class="hljs-string">"go-websocket-jsonrpc/internal/service"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">sayHello</span><span class="hljs-params">(c *rpc.Client, name <span class="hljs-keyword">string</span>)</span></span> {
    req := service.HelloRequest{Name: name}
    <span class="hljs-keyword">var</span> res service.HelloResponse

    err := c.Call(<span class="hljs-string">"HelloService.Hello"</span>, req, &amp;res)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(<span class="hljs-string">"error:"</span>, err)
    }
    log.Printf(<span class="hljs-string">"Response: %s"</span>, res.Greeting)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    ws, err := websocket.Dial(<span class="hljs-string">"ws://localhost:8080/ws"</span>, <span class="hljs-string">""</span>, <span class="hljs-string">"http://localhost/"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatal(err)
    }
    <span class="hljs-keyword">defer</span> ws.Close()

    c := jsonrpc.NewClient(ws)

    reader := bufio.NewReader(os.Stdin)
    <span class="hljs-keyword">for</span> {
        text, _ := reader.ReadString(<span class="hljs-string">'\n'</span>)
        sayHello(c, text)
    }
}
</code></pre>
<h3 id="heading-step-4-test-the-example">Step 4: Test the example</h3>
<p>Now that we have the server and client ready, we start the server in one terminal and the client in another. Every time we type a string in the client terminal and press Enter, the client sends it to the server, and we immediately see the server's response.</p>
<pre><code class="lang-bash">
&gt;./bin/client
john
2023/08/02 05:27:57 Response: Hello: john
lisa
2023/08/02 05:28:07 Response: Hello: lisa
tom
2023/08/02 05:28:08 Response: Hello: tom
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>WebSockets and JSON-RPC can work harmoniously together, providing an efficient and real-time communication mechanism for web applications. In this article, we explored how to implement a JSON-RPC server using WebSockets in Go. You can build upon this example to create more sophisticated applications with real-time updates and bidirectional data flow. Whether it's real-time chats, live notifications, or dynamic dashboards, the combination of WebSockets and JSON-RPC in Go is a powerful solution for modern web development.</p>
]]></content:encoded></item><item><title><![CDATA[How to use gRPC in Go]]></title><description><![CDATA[In the realm of modern distributed systems, efficient and reliable communication between microservices is crucial. Traditional protocols like REST have served well, but as systems grow in complexity and demand real-time communication, gRPC emerges as...]]></description><link>https://textmode.dev/how-to-use-grpc-in-go</link><guid isPermaLink="true">https://textmode.dev/how-to-use-grpc-in-go</guid><category><![CDATA[Go Language]]></category><category><![CDATA[gRPC]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sat, 23 Sep 2023 09:02:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695459428201/08ff5dc6-2f08-486e-8de6-fe3871ca56f5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the realm of modern distributed systems, efficient and reliable communication between microservices is crucial. Traditional protocols like REST have served well, but as systems grow in complexity and demand real-time communication, gRPC emerges as a powerful solution. In this article, we'll delve into gRPC, understand its benefits, and explore a practical example in Go.</p>
<h2 id="heading-what-is-grpc">What is gRPC?</h2>
<p>gRPC is an open-source high-performance Remote Procedure Call (RPC) framework developed by Google. It enables seamless communication between services running on different platforms and languages. gRPC is based on <strong>HTTP/2</strong>, which provides multiplexing, stream prioritization, and header compression, leading to faster and more efficient communication compared to REST.</p>
<h2 id="heading-key-features-and-advantages-of-grpc">Key Features and Advantages of gRPC</h2>
<p><strong>Protocol Buffers (Protobuf)</strong>: gRPC uses Protocol Buffers as the default data serialization mechanism. Protobuf offers efficient binary serialization, reducing payload size and facilitating faster data transmission.</p>
<p><strong>Bidirectional Streaming</strong>: Unlike REST, where the client sends a request and waits for a response, gRPC supports bidirectional streaming. This means both the client and server can send multiple messages asynchronously over a single connection, ideal for real-time applications.</p>
<p><strong>Code Generation</strong>: gRPC generates client and server code in various languages (including Go, Java, Python, and more) from a single Protobuf definition. This reduces the manual effort and minimizes the risk of errors when integrating services.</p>
<p><strong>Strong Typing</strong>: Protobuf enforces strong typing for message structures, ensuring the consistency and integrity of data exchanged between services.</p>
<p><strong>Unary and Streaming Calls</strong>: gRPC supports unary calls (single request and response) as well as streaming calls (continuous stream of requests or responses). This flexibility caters to various communication scenarios.</p>
<h2 id="heading-practical-example-building-a-greet-service-in-go">Practical Example: Building a Greet Service in Go</h2>
<p>Let's create a simple gRPC server and client to demonstrate the power of gRPC using Go. They will show an example of a single request and response, as well as a continuous stream from the server to the client.</p>
<h3 id="heading-install-protobuf-compiler">Install Protobuf Compiler</h3>
<p>First, we need to install the protobuf <strong>compiler</strong>. We must ensure that the version is 3+.</p>
<pre><code class="lang-bash">
    sudo apt install -y protobuf-compiler
</code></pre>
<p>Then, we have to install the Go <strong>plugins</strong> for the protobuf compiler.</p>
<pre><code class="lang-bash">
    go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
    go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
</code></pre>
<h3 id="heading-define-the-protobuf-messages">Define the Protobuf Messages</h3>
<p>Create a file named <strong>idl/service/service.proto</strong> and define the Greeter service and message structure using Protocol Buffers.</p>
<pre><code class="lang-protobuf">
syntax = <span class="hljs-string">"proto3"</span>;

option go_package = <span class="hljs-string">"internal/service"</span>;

<span class="hljs-keyword">package</span> service;

<span class="hljs-comment">// The greeting service definition.</span>
service Greeter {
  <span class="hljs-comment">// Sends a greeting</span>
  rpc SayHello (HelloRequest) returns (HelloReply) {}

  <span class="hljs-comment">// Subscription to receive a continuous stream of notifications</span>
  rpc Subscribe (Subscription) returns (stream Notification) {}
}

<span class="hljs-comment">// Messages definition</span>

message HelloRequest {
  <span class="hljs-keyword">string</span> name = <span class="hljs-number">1</span>;
}

message HelloReply {
  <span class="hljs-keyword">string</span> message = <span class="hljs-number">1</span>;
}

message Subscription {
    <span class="hljs-keyword">string</span> address = <span class="hljs-number">1</span>;
}

message Notification {
    <span class="hljs-keyword">string</span> message = <span class="hljs-number">1</span>;
}
</code></pre>
<h3 id="heading-generate-go-code-from-protobuf">Generate Go Code from Protobuf</h3>
<p>Run the protobuf compiler to compile the idl/service/service.proto file and generate the <strong>internal/service/service.pb.go</strong> and the <strong>internal/service/service_grpc.pb.go</strong> files containing the Go implementation.</p>
<pre><code class="lang-bash">
    protoc --proto_path=idl --go_out=internal --go_opt=paths=source_relative --go-grpc_out=internal --go-grpc_opt=paths=source_relative service/service.proto
</code></pre>
<h3 id="heading-implement-the-server-in-go">Implement the server in Go</h3>
<p>Create a Go file named <strong>cmd/server/main.go</strong> and implement the server.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"context"</span>
    <span class="hljs-string">"flag"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net"</span>
    <span class="hljs-string">"time"</span>

    <span class="hljs-string">"google.golang.org/grpc"</span>

    <span class="hljs-string">"go-grpc/internal/service"</span>
)

<span class="hljs-keyword">type</span> server <span class="hljs-keyword">struct</span> {
    service.UnimplementedGreeterServer
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">SayHello</span><span class="hljs-params">(ctx context.Context, in *service.HelloRequest)</span> <span class="hljs-params">(*service.HelloReply, error)</span></span> {
    log.Printf(<span class="hljs-string">"Received: %v"</span>, in.GetName())
    <span class="hljs-keyword">return</span> &amp;service.HelloReply{Message: <span class="hljs-string">"Hello "</span> + in.GetName()}, <span class="hljs-literal">nil</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *server)</span> <span class="hljs-title">Subscribe</span><span class="hljs-params">(c *service.Subscription, stream service.Greeter_SubscribeServer)</span> <span class="hljs-title">error</span></span> {
    <span class="hljs-keyword">for</span> {
        <span class="hljs-comment">// Stop streaming when the send fails, e.g. when the client disconnects.</span>
        <span class="hljs-keyword">if</span> err := stream.Send(&amp;service.Notification{Message: <span class="hljs-string">"notification"</span>}); err != <span class="hljs-literal">nil</span> {
            <span class="hljs-keyword">return</span> err
        }
        time.Sleep(time.Second * <span class="hljs-number">3</span>)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    flag.Parse()
    lis, err := net.Listen(<span class="hljs-string">"tcp"</span>, <span class="hljs-string">":50051"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"failed to listen: %v"</span>, err)
    }
    s := grpc.NewServer()
    service.RegisterGreeterServer(s, &amp;server{})
    log.Printf(<span class="hljs-string">"server listening at %v"</span>, lis.Addr())
    <span class="hljs-keyword">if</span> err := s.Serve(lis); err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"failed to serve: %v"</span>, err)
    }
}
</code></pre>
<p>Now that we have the server implementation ready, we can compile it.</p>
<pre><code class="lang-bash">
    go build -o bin/server cmd/server/main.go
</code></pre>
<h3 id="heading-implement-the-client-in-go">Implement the client in Go</h3>
<p>Create a Go file named <strong>cmd/client/main.go</strong> and implement the client.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bufio"</span>
    <span class="hljs-string">"context"</span>
    <span class="hljs-string">"io"</span>
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"time"</span>

    <span class="hljs-string">"google.golang.org/grpc"</span>
    <span class="hljs-string">"google.golang.org/grpc/credentials/insecure"</span>

    <span class="hljs-string">"go-grpc/internal/service"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">sayHello</span><span class="hljs-params">(c service.GreeterClient, name <span class="hljs-keyword">string</span>)</span></span> {
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    <span class="hljs-keyword">defer</span> cancel()
    r, err := c.SayHello(ctx, &amp;service.HelloRequest{Name: name})
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"could not greet: %v"</span>, err)
    }
    log.Printf(<span class="hljs-string">"Greeting: %s"</span>, r.GetMessage())
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">subscribe</span><span class="hljs-params">(c service.GreeterClient)</span></span> {
    ctx := context.Background()
    stream, err := c.Subscribe(ctx, &amp;service.Subscription{Address: <span class="hljs-string">"client"</span>})
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"could not subscribe: %v"</span>, err)
    }
    <span class="hljs-keyword">for</span> {
        notification, err := stream.Recv()
        <span class="hljs-keyword">if</span> err == io.EOF {
            <span class="hljs-keyword">break</span>
        }
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            log.Fatalf(<span class="hljs-string">"%v.Subscribe(_) = _, %v"</span>, c, err)
        }
        log.Println(notification)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    conn, err := grpc.Dial(<span class="hljs-string">":50051"</span>, grpc.WithTransportCredentials(insecure.NewCredentials()))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"did not connect: %v"</span>, err)
    }
    <span class="hljs-keyword">defer</span> conn.Close()
    c := service.NewGreeterClient(conn)
    <span class="hljs-keyword">go</span> subscribe(c)

    reader := bufio.NewReader(os.Stdin)
    <span class="hljs-keyword">for</span> {
        text, _ := reader.ReadString(<span class="hljs-string">'\n'</span>)
        <span class="hljs-comment">// ReadString keeps the trailing newline; trim it before greeting</span>
        <span class="hljs-keyword">if</span> <span class="hljs-built_in">len</span>(text) &gt; <span class="hljs-number">0</span> {
            text = text[:<span class="hljs-built_in">len</span>(text)-<span class="hljs-number">1</span>]
        }
        sayHello(c, text)
    }
}
</code></pre>
<p>We now compile the client to generate the corresponding executable.</p>
<pre><code class="lang-bash">
    go build -o bin/client cmd/client/main.go
</code></pre>
<h3 id="heading-run-the-example">Run the example</h3>
<p>Once we have the server and client executables ready, we can run them to test the example. We start the <strong>server</strong> in one terminal.</p>
<pre><code class="lang-bash">
    ./bin/server
    2023/08/01 11:07:51 server listening at [::]:50051
</code></pre>
<p>And the <strong>client</strong> in another one. Notification messages will start appearing in the client terminal, and if we type a string, we will see the server's reply.</p>
<pre><code class="lang-bash">
    ./bin/client
    2023/08/01 11:08:01 message:<span class="hljs-string">"notification"</span>
    2023/08/01 11:08:04 message:<span class="hljs-string">"notification"</span>
    2023/08/01 11:08:07 message:<span class="hljs-string">"notification"</span>
    John
    2023/08/01 11:08:08 Greeting: Hello John
    2023/08/01 11:08:10 message:<span class="hljs-string">"notification"</span>
</code></pre>
<p>We can also check on the <strong>server</strong> terminal that the message from the client was received.</p>
<pre><code class="lang-bash">
    ./bin/server
    2023/08/01 11:07:51 server listening at [::]:50051
    2023/08/01 11:08:08 Received: John
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>gRPC provides a fast, efficient, and flexible communication framework for distributed systems. In this article, we explored its key features and advantages and demonstrated a practical example using Go. As you build distributed systems, consider gRPC whenever you need efficient, strongly typed service-to-service communication.</p>
]]></content:encoded></item><item><title><![CDATA[How to use Go profiling tools]]></title><description><![CDATA[Profiling is the process of analyzing an application's runtime behavior to pinpoint performance issues. It helps developers identify where their code spends most of its time and how memory is allocated and released. Profiling tools for Go offer inval...]]></description><link>https://textmode.dev/how-to-use-go-profiling-tools</link><guid isPermaLink="true">https://textmode.dev/how-to-use-go-profiling-tools</guid><category><![CDATA[Go Language]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Sat, 23 Sep 2023 08:30:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695459445968/fcfb2e42-e4f5-4987-90ca-c943855e9cf3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Profiling is the process of analyzing an application's runtime behavior to pinpoint performance issues. It helps developers identify where their code spends most of its time and how memory is allocated and released. Profiling tools for Go offer invaluable information for making informed optimizations and delivering a smooth and efficient user experience.</p>
<h2 id="heading-key-types-of-profiling-in-go">Key Types of Profiling in Go</h2>
<p><strong>CPU Profiling</strong>: CPU profiling identifies areas of your code that consume the most CPU time. It helps you understand which functions or methods are taking up the majority of the processing power, allowing you to focus on optimizing those areas.</p>
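<p>As a quick illustration, a CPU profile can also be captured programmatically with the <code>runtime/pprof</code> package. This is a minimal sketch; <code>cpu.prof</code> and the <code>busyWork</code> workload are arbitrary placeholders, not part of the article's example.</p>
<pre><code class="lang-go">
package main

import (
    "fmt"
    "os"
    "runtime/pprof"
)

// busyWork is a placeholder workload so the profiler has something to sample.
func busyWork() int {
    sum := 0
    for i := 0; i &lt; 5_000_000; i++ {
        sum += i % 7
    }
    return sum
}

func main() {
    f, err := os.Create("cpu.prof") // illustrative output file name
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Sample the CPU while busyWork runs, then flush the profile on stop.
    if err := pprof.StartCPUProfile(f); err != nil {
        panic(err)
    }
    busyWork()
    pprof.StopCPUProfile()

    info, _ := os.Stat("cpu.prof")
    fmt.Println("profile written:", info.Size() &gt; 0)
}
</code></pre>
<p>The resulting file can then be inspected with <code>go tool pprof cpu.prof</code>.</p>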
<p><strong>Memory Profiling</strong>: Memory profiling reveals memory allocation patterns and potential memory leaks within your application. It provides insights into how your application uses memory and assists in preventing excessive memory consumption.</p>
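<p>A heap snapshot can likewise be written on demand from <code>runtime/pprof</code>. This is a sketch; the file name, allocation count, and sizes are arbitrary choices for illustration.</p>
<pre><code class="lang-go">
package main

import (
    "fmt"
    "os"
    "runtime"
    "runtime/pprof"
)

// sink keeps the allocations reachable so they appear in the heap profile.
var sink [][]byte

func main() {
    // Allocate something so the heap profile has content.
    for i := 0; i &lt; 100; i++ {
        sink = append(sink, make([]byte, 64*1024))
    }

    f, err := os.Create("heap.prof") // illustrative output file name
    if err != nil {
        panic(err)
    }
    defer f.Close()

    runtime.GC() // run a collection to get up-to-date allocation statistics
    if err := pprof.WriteHeapProfile(f); err != nil {
        panic(err)
    }

    info, _ := os.Stat("heap.prof")
    fmt.Println("heap profile written:", info.Size() &gt; 0)
}
</code></pre>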
<p><strong>Goroutine Profiling</strong>: Goroutines are a hallmark of Go's concurrency model. Goroutine profiling aids in understanding how goroutines are being created, executed, and blocked, helping you optimize concurrent execution and resource utilization.</p>
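<p>The goroutine profile can be dumped at any point via <code>pprof.Lookup</code>. In this sketch, the sleeping goroutines exist only to populate the profile.</p>
<pre><code class="lang-go">
package main

import (
    "fmt"
    "os"
    "runtime/pprof"
    "time"
)

func main() {
    // Park a few goroutines so they show up in the profile.
    for i := 0; i &lt; 3; i++ {
        go func() { time.Sleep(time.Minute) }()
    }
    time.Sleep(100 * time.Millisecond) // give them time to start

    p := pprof.Lookup("goroutine")
    fmt.Println("goroutines in profile:", p.Count() &gt;= 4) // 3 sleepers + main
    p.WriteTo(os.Stdout, 1)                               // debug=1 prints a readable dump with stacks
}
</code></pre>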
<h2 id="heading-popular-profiling-tools">Popular Profiling Tools</h2>
<p><strong>pprof</strong>: The built-in pprof package is a cornerstone of Go's profiling capabilities. It offers both CPU and memory profiling, along with a web-based visualization tool. By simply importing the net/http/pprof package and exposing endpoints, you can access valuable insights about your application's performance.</p>
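<p>Wiring this up takes only a blank import and an HTTP server. The sketch below also fetches the pprof index page to confirm the endpoints are live; port 6060 is an arbitrary choice and is assumed to be free.</p>
<pre><code class="lang-go">
package main

import (
    "fmt"
    "log"
    "net/http"
    _ "net/http/pprof" // side effect: registers /debug/pprof/* on the default mux
    "time"
)

func main() {
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()

    // Poll the index page until the server is up, then report the status.
    var resp *http.Response
    var err error
    for i := 0; i &lt; 50; i++ {
        resp, err = http.Get("http://localhost:6060/debug/pprof/")
        if err == nil {
            break
        }
        time.Sleep(50 * time.Millisecond)
    }
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println("pprof index status:", resp.StatusCode)
}
</code></pre>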
<p><strong>go-torch</strong>: This tool creates flame graphs from pprof profiles, providing a visual representation of the CPU consumption hierarchy. Flame graphs make it easier to identify hotspots and optimize critical sections of your code.</p>
<p><strong>pprof-utils</strong>: This collection of utilities extends the pprof toolset by offering additional functionalities, such as memory profiling for long-running applications and custom visualizations.</p>
<p><strong>Heapster</strong>: Heapster is designed specifically for memory profiling in Go. It offers memory allocation stack traces, allowing you to identify memory-hungry sections of your code and optimize memory usage effectively.</p>
<p><strong>trace</strong>: The trace package in the Go standard library facilitates tracing the execution of programs. It helps you understand the timing and interactions between goroutines, system calls, and application logic, aiding in fine-tuning concurrency.</p>
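<p>A minimal sketch of capturing an execution trace with <code>runtime/trace</code>; the workload and output file name are placeholders.</p>
<pre><code class="lang-go">
package main

import (
    "fmt"
    "os"
    "runtime/trace"
)

func main() {
    f, err := os.Create("trace.out") // illustrative output file name
    if err != nil {
        panic(err)
    }
    defer f.Close()

    if err := trace.Start(f); err != nil {
        panic(err)
    }
    // Some goroutine activity to record in the trace.
    done := make(chan struct{})
    go func() {
        sum := 0
        for i := 0; i &lt; 1_000_000; i++ {
            sum += i
        }
        close(done)
    }()
    &lt;-done
    trace.Stop()

    info, _ := os.Stat("trace.out")
    fmt.Println("trace written:", info.Size() &gt; 0)
}
</code></pre>
<p>The captured trace can then be explored with <code>go tool trace trace.out</code>.</p>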
<h2 id="heading-using-profiling-tools-in-practice">Using Profiling Tools in Practice</h2>
<p>We are going to use a simple example to exercise the basic profiling tools that ship with the Go standard toolchain. The sample has two variants: one allocates a new byte slice on every loop iteration and sends it to a receiving goroutine, while the other reuses byte slices through a BytePool instead of allocating a new one on each iteration.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
        <span class="hljs-string">"os"</span>
)

<span class="hljs-keyword">const</span> REP <span class="hljs-keyword">int</span> = <span class="hljs-number">10000000</span>

<span class="hljs-keyword">type</span> BytePool <span class="hljs-keyword">struct</span> {
    pool  <span class="hljs-keyword">chan</span> []<span class="hljs-keyword">byte</span>
    width <span class="hljs-keyword">int</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">New</span><span class="hljs-params">(poolSize <span class="hljs-keyword">int</span>, bufferWidth <span class="hljs-keyword">int</span>)</span> *<span class="hljs-title">BytePool</span></span> {
    <span class="hljs-keyword">return</span> &amp;BytePool{
        pool:  <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> []<span class="hljs-keyword">byte</span>, poolSize),
        width: bufferWidth,
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(bp *BytePool)</span> <span class="hljs-title">Get</span><span class="hljs-params">()</span> <span class="hljs-params">(b []<span class="hljs-keyword">byte</span>)</span></span> {
    <span class="hljs-keyword">select</span> {
    <span class="hljs-keyword">case</span> b = &lt;-bp.pool:
    <span class="hljs-keyword">default</span>:
        b = <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, bp.width)
    }
    <span class="hljs-keyword">return</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(bp *BytePool)</span> <span class="hljs-title">Put</span><span class="hljs-params">(b []<span class="hljs-keyword">byte</span>)</span></span> {
    <span class="hljs-keyword">select</span> {
    <span class="hljs-keyword">case</span> bp.pool &lt;- b:
    <span class="hljs-keyword">default</span>:
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">ProcessPool</span><span class="hljs-params">()</span></span> {
    channel := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> []<span class="hljs-keyword">byte</span>)
    <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        pool := New(REP, <span class="hljs-number">5</span>)
        <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; REP; i++ {
            data := pool.Get()
            channel &lt;- data
            pool.Put(data)
        }
    }()

    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; REP; i++ {
        _ = &lt;-channel
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">Process</span><span class="hljs-params">()</span></span> {
    channel := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> []<span class="hljs-keyword">byte</span>)
    <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; REP; i++ {
            data := []<span class="hljs-keyword">byte</span>(<span class="hljs-string">"hello"</span>)
            channel &lt;- data
        }
    }()

    <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; REP; i++ {
        _ = &lt;-channel
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    arguments := os.Args

    <span class="hljs-comment">// guard against a missing command-line argument</span>
    <span class="hljs-keyword">if</span> <span class="hljs-built_in">len</span>(arguments) &lt; <span class="hljs-number">2</span> {
        <span class="hljs-keyword">return</span>
    }

    <span class="hljs-keyword">if</span> arguments[<span class="hljs-number">1</span>] == <span class="hljs-string">"process"</span> {
        Process()
    } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> arguments[<span class="hljs-number">1</span>] == <span class="hljs-string">"processPool"</span> {
        ProcessPool()
    }
}
</code></pre>
<h3 id="heading-garbage-collection">Garbage Collection</h3>
<p>In Go, the <strong>GODEBUG</strong> environment variable allows you to enable various debugging options and runtime parameters. One of the features you can enable using GODEBUG is the Garbage Collection (GC) tracing, which is referred to as gctrace. Garbage collection is an essential process in Go that manages memory by identifying and reclaiming unused memory occupied by objects that are no longer reachable.</p>
<p>The <strong>gctrace</strong> option provides insight into the behavior of the Go garbage collector. It outputs information about garbage collection events, pause durations, and memory usage during the garbage collection process. In each line, the three <code>MB</code> figures separated by <code>-&gt;</code> show the heap size when the collection started, the heap size when it finished, and the live heap remaining afterwards. This information can be incredibly useful for understanding how the garbage collector operates, diagnosing potential performance issues, and optimizing your Go programs.</p>
<p>To enable the gctrace option, set the GODEBUG environment variable to include <strong>gctrace=1</strong>. You can do this before running your Go program from the command line.</p>
<p>Let's run the sample variant <strong>without</strong> the BytePool to check how many garbage collection cycles it goes through.</p>
<pre><code class="lang-bash">
GODEBUG=gctrace=1 go run main.go process
gc 1 @0.071s 0%: 0.043+0.31+0.022 ms clock, 0.34+0.073/0.27/0.31+0.18 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.080s 0%: 0.022+0.68+0.002 ms clock, 0.17+0.38/0.62/0.008+0.022 ms cpu, 3-&gt;4-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 3 @0.083s 0%: 0.034+0.66+0.028 ms clock, 0.27+0/0.70/0.37+0.22 ms cpu, 3-&gt;3-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 4 @0.089s 0%: 0.010+0.37+0.002 ms clock, 0.081+0.19/0.43/0.22+0.018 ms cpu, 3-&gt;4-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 5 @0.091s 0%: 0.11+0.59+0.11 ms clock, 0.88+0.10/0.51/0.14+0.92 ms cpu, 3-&gt;4-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 6 @0.093s 0%: 0.008+0.33+0.002 ms clock, 0.070+0.089/0.32/0.34+0.020 ms cpu, 3-&gt;3-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
<span class="hljs-comment"># command-line-arguments</span>
gc 1 @0.002s 5%: 0.050+0.63+0.024 ms clock, 0.40+0.16/0.96/0.42+0.19 ms cpu, 3-&gt;4-&gt;2 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.006s 6%: 0.044+1.2+0.036 ms clock, 0.35+0.49/1.4/0.53+0.29 ms cpu, 6-&gt;6-&gt;4 MB, 6 MB goal, 0 MB stacks, 0 MB globals, 8 P

<span class="hljs-comment"># command-line-arguments</span>
gc 1 @0.001s 6%: 0.018+0.54+0.022 ms clock, 0.14+0.27/0.34/0.56+0.17 ms cpu, 4-&gt;6-&gt;5 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.003s 5%: 0.008+0.97+0.037 ms clock, 0.070+0.051/0.40/0.72+0.29 ms cpu, 9-&gt;9-&gt;9 MB, 11 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 1 @0.111s 0%: 0.019+0.23+0.079 ms clock, 0.15+0.052/0.041/0+0.63 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.227s 0%: 0.14+0.20+0.064 ms clock, 1.1+0.031/0.10/0+0.51 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 3 @0.348s 0%: 0.091+0.23+0.022 ms clock, 0.72+0.032/0.085/0.030+0.17 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 4 @0.466s 0%: 0.048+0.29+0.040 ms clock, 0.38+0.030/0.044/0.024+0.32 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 5 @0.551s 0%: 0.078+0.20+0.032 ms clock, 0.62+0.027/0.074/0.016+0.25 ms cpu, 2-&gt;2-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 6 @0.665s 0%: 0.074+0.36+0.082 ms clock, 0.59+0.032/0.10/0+0.66 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 7 @0.752s 0%: 0.041+0.36+0.079 ms clock, 0.33+0.028/0.063/0.016+0.63 ms cpu, 2-&gt;2-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 8 @0.866s 0%: 0.14+0.50+0.072 ms clock, 1.1+0.062/0.060/0+0.57 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 9 @0.953s 0%: 0.066+0.27+0.089 ms clock, 0.52+0.028/0.055/0+0.71 ms cpu, 2-&gt;2-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 10 @1.069s 0%: 0.076+0.36+0.023 ms clock, 0.61+0.041/0.049/0+0.18 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 11 @1.185s 0%: 0.074+0.30+0.083 ms clock, 0.59+0.036/0.050/0+0.66 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 12 @1.301s 0%: 0.077+0.26+0.035 ms clock, 0.61+0.028/0.075/0.040+0.28 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 13 @1.386s 0%: 0.087+0.38+0.075 ms clock, 0.69+0.030/0.049/0+0.60 ms cpu, 2-&gt;2-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 14 @1.503s 0%: 0.081+0.20+0.002 ms clock, 0.65+0.068/0.045/0+0.016 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
</code></pre>
<p>Now, we can compare those results <strong>with</strong> the BytePool variant.</p>
<pre><code class="lang-bash">
GODEBUG=gctrace=1 go run main.go processPool
gc 1 @0.053s 0%: 0.080+0.24+0.023 ms clock, 0.64+0.039/0.28/0.36+0.19 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.060s 0%: 0.038+0.74+0.002 ms clock, 0.31+0.38/0.79/0.37+0.022 ms cpu, 3-&gt;4-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 3 @0.062s 0%: 0.031+0.49+0.027 ms clock, 0.25+0.15/0.46/0.48+0.21 ms cpu, 3-&gt;3-&gt;0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 4 @0.066s 0%: 0.010+0.35+0.002 ms clock, 0.084+0.040/0.39/0.33+0.023 ms cpu, 3-&gt;3-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 5 @0.068s 0%: 0.008+0.35+0.026 ms clock, 0.065+0.052/0.47/0.37+0.21 ms cpu, 3-&gt;3-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 6 @0.070s 1%: 0.010+0.46+0.022 ms clock, 0.083+0.040/0.35/0.45+0.18 ms cpu, 3-&gt;4-&gt;1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
<span class="hljs-comment"># command-line-arguments</span>
gc 1 @0.002s 5%: 0.050+0.63+0.024 ms clock, 0.40+0.16/0.96/0.42+0.19 ms cpu, 3-&gt;4-&gt;2 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.006s 6%: 0.044+1.2+0.036 ms clock, 0.35+0.49/1.4/0.53+0.29 ms cpu, 6-&gt;6-&gt;4 MB, 6 MB goal, 0 MB stacks, 0 MB globals, 8 P
<span class="hljs-comment"># command-line-arguments</span>
gc 1 @0.001s 5%: 0.006+0.45+0.003 ms clock, 0.054+0.25/0.43/0.21+0.024 ms cpu, 4-&gt;5-&gt;5 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.002s 7%: 0.007+0.76+0.025 ms clock, 0.063+0.048/0.99/0.35+0.20 ms cpu, 9-&gt;9-&gt;9 MB, 11 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 1 @0.018s 13%: 0.009+11+0.003 ms clock, 0.075+9.3/21/45+0.030 ms cpu, 229-&gt;229-&gt;228 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
</code></pre>
<p>Here we can see how the number of garbage collection cycles in the BytePool variant is far <strong>lower</strong> than in the regular version of the program. Note that <code>go run</code> also compiles the program, so the early <code>gc</code> lines and those under <code># command-line-arguments</code> come from the toolchain itself; the final block of lines belongs to our process.</p>
<h3 id="heading-heap-escape-analysis">Heap escape analysis</h3>
<p>Escape analysis is a critical optimization technique used by the Go compiler to determine the lifetime of variables and whether they should be allocated on the stack or the heap. The main goal of escape analysis is to minimize the allocation and deallocation of memory, resulting in more efficient memory usage and improved performance in Go programs.</p>
<p>In Go, memory allocation on the heap involves more overhead compared to stack allocation. Variables allocated on the stack have faster access times and are automatically deallocated when they go out of scope, whereas heap-allocated variables require the Garbage Collector to manage their lifecycle, which can introduce potential performance overhead.</p>
<p>Escape analysis helps the compiler make informed decisions about whether a variable's reference escapes its immediate scope, thus necessitating heap allocation. The analysis identifies scenarios where a variable's reference is stored outside the current function's scope, such as:</p>
<p><strong>Returning Pointers</strong>: If a function returns a pointer to a local variable, the variable must be allocated on the heap to ensure its data remains valid even after the function exits.</p>
<p><strong>Storing Pointers</strong>: If a pointer to a local variable is stored in a data structure or globally accessible variable, the variable's data must be kept on the heap.</p>
<p><strong>Goroutine Communication</strong>: If a variable's reference is passed to a goroutine (concurrent function), it must be accessible from the heap since the goroutine's lifetime extends beyond the current function.</p>
<p><strong>Function Parameters</strong>: If a variable's reference is passed as a parameter to another function and that function retains the reference, the variable's data must reside on the heap.</p>
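<p>The first scenario is easy to reproduce. In this sketch, <code>newCounter</code> returns the address of a local variable, so under <code>-gcflags '-m'</code> the compiler reports something like <code>moved to heap: v</code>, while the purely local arithmetic in <code>sum</code> stays on the stack (both function names are illustrative, not from the article's sample).</p>
<pre><code class="lang-go">
package main

import "fmt"

// v's address outlives the call, so v must be allocated on the heap.
func newCounter() *int {
    v := 0
    return &amp;v
}

// total never leaves the frame, so it can stay on the stack.
func sum(n int) int {
    total := 0
    for i := 0; i &lt; n; i++ {
        total += i
    }
    return total
}

func main() {
    c := newCounter()
    *c = 42
    fmt.Println(*c, sum(5))
}
</code></pre>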
<p>We can perform a heap escape analysis of a program by using the <strong>-gcflags</strong> argument with the option '<strong>-m</strong>'. Let's run the sample version using these options to generate the heap escape analysis of the program.</p>
<pre><code class="lang-bash">
go run -gcflags <span class="hljs-string">'-m'</span> main.go process
<span class="hljs-comment"># command-line-arguments</span>
./main.go:14:6: can inline New
./main.go:21:6: can inline (*BytePool).Get
./main.go:30:6: can inline (*BytePool).Put
./main.go:39:5: can inline ProcessPool.func1
./main.go:40:14: inlining call to New
./main.go:42:20: inlining call to (*BytePool).Get
./main.go:44:12: inlining call to (*BytePool).Put
./main.go:55:5: can inline Process.func1
./main.go:15:9: &amp;BytePool{...} escapes to heap
./main.go:21:7: bp does not escape
./main.go:25:11: make([]byte, bp.width) escapes to heap
./main.go:30:7: bp does not escape
./main.go:30:25: leaking param: b
./main.go:39:5: func literal escapes to heap
./main.go:40:14: &amp;BytePool{...} does not escape
./main.go:42:20: make([]byte, bp.width) escapes to heap
./main.go:55:5: func literal escapes to heap
./main.go:57:18: ([]byte)(<span class="hljs-string">"hello"</span>) escapes to heap
</code></pre>
<h3 id="heading-memory-benchmark">Memory benchmark</h3>
<p>The <strong>benchmem</strong> option is a feature of the built-in testing package (<strong>testing</strong>) that reports memory allocation behavior during benchmark tests. Benchmark tests in Go evaluate the performance of code snippets, functions, or packages by running them repeatedly and measuring execution time; the benchmem option extends this with memory-related metrics, providing insights into allocation patterns.</p>
<p>When you run benchmark tests with the benchmem option enabled, the Go testing framework not only measures execution time but also reports, for each benchmark, the number of bytes allocated and the number of heap allocations per operation.</p>
<p>We need first to code a test for the sample program. It will contain methods for testing and benchmarking both versions of the program.</p>
<pre><code class="lang-go">
<span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> <span class="hljs-string">"testing"</span>

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">TestProcess</span><span class="hljs-params">(t *testing.T)</span></span>{
    Process()
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkProcess</span><span class="hljs-params">(b *testing.B)</span></span>{
        <span class="hljs-keyword">for</span> n := <span class="hljs-number">0</span>; n &lt; b.N; n++ {
        Process()
        }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">TestProcessPool</span><span class="hljs-params">(t *testing.T)</span></span>{
    ProcessPool()
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">BenchmarkProcessPool</span><span class="hljs-params">(b *testing.B)</span></span>{
        <span class="hljs-keyword">for</span> n := <span class="hljs-number">0</span>; n &lt; b.N; n++ {
        ProcessPool()
        }
}
</code></pre>
<p>Now we can exercise the sample program with the benchmem option to compare the performance of both versions of the program.</p>
<pre><code class="lang-bash">
go <span class="hljs-built_in">test</span> -bench . -benchmem
goos: linux
goarch: amd64
pkg: go-profiling-tools
cpu: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
BenchmarkProcess-8                     1        1585461568 ns/op        53334224 B/op   10000006 allocs/op
BenchmarkProcessPool-8                 1        2078355011 ns/op        240001664 B/op         6 allocs/op
PASS
ok      go-profiling-tools      7.252s
</code></pre>
<p>The output includes two memory-related columns for each benchmark:</p>
<ul>
<li><p><strong>B/op</strong>: the average number of bytes allocated per benchmark iteration.</p>
</li>
<li><p><strong>allocs/op</strong>: the average number of heap allocations per benchmark iteration.</p>
</li>
</ul>
<p>In this run, the BytePool variant performs only 6 allocations versus 10 million, although one of them is the large channel buffer backing the pool itself, which accounts for its higher <strong>B/op</strong> figure.</p>
<h3 id="heading-cpu-profiling">CPU profiling</h3>
<p>The <strong>cpuprofile</strong> is a feature provided by the built-in <strong>pprof</strong> package that allows you to profile the CPU usage of your program. Profiling is the process of analyzing a program's runtime behavior to identify performance bottlenecks and areas for optimization. The cpuprofile feature is particularly useful for understanding how much time your program spends on different functions and methods, helping you pinpoint areas that might benefit from optimization.</p>
<p>The pprof package can expose HTTP endpoints that let you explore the resulting profiles interactively in a web browser. However, we are going to use the options required for generating static reports, so that the CPU profiling can be integrated as a step of a CI/CD pipeline. To do so, we need to install the graphviz package, which pprof uses to render the resulting SVG graphics.</p>
<pre><code class="lang-bash">
sudo apt install graphviz
</code></pre>
<p>The CPU profile generated using the cpuprofile feature provides insights into the call stack and the amount of time spent in each function or method. This information can help you identify performance hotspots and optimize critical sections of your code.</p>
<pre><code class="lang-bash">
go <span class="hljs-built_in">test</span> -cpuprofile cpu.out &amp;&amp; go tool pprof -svg -output=cpu.out.svg cpu.out
PASS
ok      go-profiling-tools      3.917s
Generating report <span class="hljs-keyword">in</span> cpu.out.svg
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695457301530/b851d9d0-63f5-4111-b668-deb23b6e7199.png" alt="cpu.out.svg" class="image--center mx-auto" /></p>
<p>It's worth noting that profiling does introduce some overhead, so it's recommended to use it selectively in performance-critical scenarios. Additionally, the profiling endpoints should not be exposed in production environments due to potential security risks.</p>
<h3 id="heading-ram-profiling">RAM profiling</h3>
<p>The memory profiling feature allows you to monitor and analyze the memory usage and allocation patterns of your program over time. This is accomplished through the generation of memory profiles, which provide insights into how memory is being allocated and deallocated by your Go application. Memory profiling can help you identify memory leaks, inefficient memory usage, and opportunities for optimization.</p>
<p>The Go runtime includes a built-in memory profiler that can be triggered during the execution of your program. This profiler generates memory profiles that you can later analyze using various tools to gain a better understanding of your program's memory behavior.</p>
<p>Similarly to the CPU profiling, we are going to generate a static report that could be integrated in a CI/CD pipeline.</p>
<pre><code class="lang-bash">
go <span class="hljs-built_in">test</span> -test.memprofilerate=1 -memprofile mem.out &amp;&amp; go tool pprof -svg -output=mem.out.svg mem.out
PASS
ok      go-profiling-tools      8.389s
Generating report <span class="hljs-keyword">in</span> mem.out.svg
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695457340764/2214c6a4-1b92-4636-9ad8-0ffd78ba8e66.png" alt="mem.out.svg" class="image--center mx-auto" /></p>
<h3 id="heading-coverage-profiling">Coverage profiling</h3>
<p>Coverage profiling is not related to performance. However, it is a really useful feature of the Go toolset. The <strong>coverprofile</strong> feature is used to generate code coverage reports during the execution of your tests. Code coverage measures the extent to which your tests exercise the different parts of your codebase. It helps you understand which portions of your code are being tested and which areas might need additional test coverage.</p>
<p>By using the coverprofile flag during the go test command, you can generate a coverage profile that shows which lines of code were executed by your tests. This profile can then be used to generate detailed coverage reports that indicate which parts of your code were covered and which were not.</p>
<pre><code class="lang-bash">
go <span class="hljs-built_in">test</span> -coverprofile=coverage.out &amp;&amp; go tool cover -html=coverage.out -o coverage.out.html
PASS
coverage: 80.8% of statements
ok      go-profiling-tools      3.948s
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695457360811/691ec185-49ac-46d9-9ab0-1be3fbde78cd.png" alt="coverage.out.html" class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Profiling tools are indispensable assets for any Go developer aiming to create performant applications. By harnessing the capabilities of CPU, memory, and goroutine profiling, you can unlock the full potential of Go's efficiency and concurrency model. Whether you're building microservices, web applications, or system-level software, profiling tools provide the roadmap to optimizing performance, minimizing resource consumption, and delivering exceptional user experiences. So, embrace the power of profiling tools and embark on a journey toward crafting high-performance Go applications that stand out in today's competitive software landscape.</p>
]]></content:encoded></item><item><title><![CDATA[How to create a systemd file exchange service]]></title><description><![CDATA[In this article, we are going to go over an example for defining a systemd service that will allow the user to exchange files with another machine. It will work by specifying a couple of folders for sending and receiving files. These files will be ex...]]></description><link>https://textmode.dev/how-to-create-a-systemd-file-exchange-service</link><guid isPermaLink="true">https://textmode.dev/how-to-create-a-systemd-file-exchange-service</guid><category><![CDATA[Bash]]></category><category><![CDATA[systemd]]></category><category><![CDATA[ftp]]></category><category><![CDATA[inotifywait]]></category><category><![CDATA[ logger]]></category><dc:creator><![CDATA[Alvaro Leal]]></dc:creator><pubDate>Wed, 16 Aug 2023 18:03:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695459467464/7555cef1-b2aa-4feb-aee5-9475150a6ade.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, we are going to go over an example for defining a systemd service that will allow the user to exchange files with another machine. It will work by specifying a couple of folders for sending and receiving files. These files will be exchanged using the File Transfer Protocol, commonly known as FTP, which is a standard network protocol used for transferring files from one host to another over a TCP-based network, such as the Internet. An FTP service serves as an intermediary platform that facilitates the seamless movement of files, providing a reliable and structured way to exchange data.</p>
<h2 id="heading-installation">Installation</h2>
<p>The first step consists of installing the dependencies required for running the service: <strong>curl</strong> for sending files over FTP, <strong>pure-ftpd</strong> as the FTP server, and <strong>inotify-tools</strong> for monitoring the tray folders.</p>
<pre><code class="lang-bash">sudo apt install curl
sudo apt install pure-ftpd
sudo apt install inotify-tools
</code></pre>
<h2 id="heading-send-service">Send service</h2>
<p>The send service will consist of a <strong>send.sh</strong> script, which takes five arguments:</p>
<ul>
<li><p><strong>out_tray</strong>: this is the folder where the user has to put the files to be sent to the destination host.</p>
</li>
<li><p><strong>ftp_user</strong>: user used for the FTP connection.</p>
</li>
<li><p><strong>ftp_password</strong>: password for the user of the FTP connection.</p>
</li>
<li><p><strong>ftp_host</strong>: destination host for the FTP transfer.</p>
</li>
<li><p><strong>ftp_folder</strong>: folder within the destination host where the files are going to be copied.</p>
</li>
</ul>
<p>This script monitors the out_tray folder for new files using the <strong>inotifywait</strong> utility. Note that it watches for move events, so files have to be moved (not copied) into the folder in order to be detected. Once a new file is detected, the script sends it to the FTP destination using <strong>curl</strong> and removes the local copy. It also uses the <strong>logger</strong> utility to print traces about the execution of the script.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-keyword">if</span> [ <span class="hljs-variable">$#</span> -ne 5 ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Usage: send.sh &lt;out_tray&gt; &lt;ftp_user&gt; &lt;ftp_password&gt; &lt;ftp_ip&gt; &lt;ftp_folder&gt;"</span>;
  <span class="hljs-built_in">exit</span> 1;
<span class="hljs-keyword">fi</span>

<span class="hljs-keyword">while</span> file=<span class="hljs-string">"<span class="hljs-subst">$(inotifywait -q -e move --format %f $1)</span>"</span>;
<span class="hljs-keyword">do</span>
  logger <span class="hljs-string">"[`basename "</span><span class="hljs-variable">$0</span><span class="hljs-string">"`] Sending file <span class="hljs-variable">$file</span>"</span>
  curl -T <span class="hljs-string">"<span class="hljs-variable">$1</span>/<span class="hljs-variable">$file</span>"</span> <span class="hljs-string">"ftp://<span class="hljs-variable">$4</span>/<span class="hljs-variable">$5</span>/<span class="hljs-variable">$file</span>"</span> --user <span class="hljs-string">"<span class="hljs-variable">$2</span>:<span class="hljs-variable">$3</span>"</span>
  rm <span class="hljs-string">"<span class="hljs-variable">$1</span>/<span class="hljs-variable">$file</span>"</span>;
  logger <span class="hljs-string">"[`basename "</span><span class="hljs-variable">$0</span><span class="hljs-string">"`] Sent file <span class="hljs-variable">$file</span>"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p>The next step is to define a <strong>systemd</strong> service that will call the <strong>send.sh</strong> script.</p>
<pre><code class="lang-bash">[Unit]
Description=Send Service

[Service]
ExecStart=/usr/<span class="hljs-built_in">local</span>/bin/transferor/send.sh /usr/<span class="hljs-built_in">local</span>/bin/transferor/outTray al al 127.0.0.1 /usr/<span class="hljs-built_in">local</span>/bin/transferor/inTray
StandardOutput=null
Restart=on-failure

[Install]
WantedBy=multi-user.target
Alias=send.service
</code></pre>
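<p>Since <strong>send.sh</strong> needs the network to reach the FTP host, the unit can optionally be ordered after the network is up using the standard <strong>network-online.target</strong>. The sketch below is a hardened variant of the unit above, with the same paths and arguments; whether the target is actually reached depends on your distribution having a network-wait service enabled.</p>

```ini
[Unit]
Description=Send Service
# Wait until the network is actually up before watching the tray,
# since curl needs to reach the FTP host.
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/transferor/send.sh /usr/local/bin/transferor/outTray al al 127.0.0.1 /usr/local/bin/transferor/inTray
StandardOutput=null
Restart=on-failure

[Install]
WantedBy=multi-user.target
Alias=send.service
```

Together with <strong>Restart=on-failure</strong>, this makes the service tolerate both crashes of the script and late network bring-up at boot.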
<h2 id="heading-receive-service">Receive service</h2>
<p>The receiving service will follow a similar approach. It will be composed of the <strong>recv.sh</strong> script, which takes two arguments:</p>
<ul>
<li><p><strong>in_tray</strong>: folder where files sent through FTP from the source host are received.</p>
</li>
<li><p><strong>script</strong>: script file to be executed to process the files received in the in_tray folder.</p>
</li>
</ul>
<p>This script also monitors a folder using the <strong>inotifywait</strong> utility, in this case waiting for write events. Once a new file appears in the in_tray folder, it runs the script specified as an argument to process the received file. As with <strong>send.sh</strong>, it uses the <strong>logger</strong> utility to print traces about the execution of the script.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-keyword">if</span> [ <span class="hljs-variable">$#</span> -ne 2 ]; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Usage: recv.sh &lt;in_tray&gt; &lt;script&gt;"</span>;
  <span class="hljs-built_in">exit</span> 1;
<span class="hljs-keyword">fi</span>

<span class="hljs-keyword">while</span> file=<span class="hljs-string">"<span class="hljs-subst">$(inotifywait -q -e close_write --format %f $1)</span>"</span>;
<span class="hljs-keyword">do</span>
  logger <span class="hljs-string">"[`basename "</span><span class="hljs-variable">$0</span><span class="hljs-string">"`] Received file <span class="hljs-variable">$file</span>"</span>
  <span class="hljs-built_in">source</span> <span class="hljs-string">"<span class="hljs-variable">$2</span>"</span> <span class="hljs-string">"<span class="hljs-variable">$1</span>/<span class="hljs-variable">$file</span>"</span>;
  logger <span class="hljs-string">"[`basename "</span><span class="hljs-variable">$0</span><span class="hljs-string">"`] Processed file <span class="hljs-variable">$file</span>"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p>For example, we can define an <strong>echo.sh</strong> processing script that just echoes the content of the received file.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

cat <span class="hljs-string">"<span class="hljs-variable">$1</span>"</span>
</code></pre>
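<p>The processing script receives the full path of the received file as its first argument, so it can do anything with it. As a slightly richer (hypothetical) example, the sketch below prints the file and then moves it into a <strong>processed</strong> subfolder, an arbitrary name chosen here, so the tray only holds files that still need processing. The demo at the bottom exercises the logic on a throwaway folder.</p>

```bash
#!/bin/bash
# Hypothetical processor: print the received file, then archive it
# into a "processed" subfolder next to it (name is an arbitrary
# choice for this sketch).

process() {
  local file="$1"
  local archive
  archive="$(dirname "$file")/processed"

  mkdir -p "$archive"
  cat "$file"
  mv "$file" "$archive/"
}

# Demo on a throwaway tray folder:
tray="$(mktemp -d)"
echo "hello" > "$tray/test.txt"
process "$tray/test.txt"   # prints "hello"
ls "$tray/processed"       # shows test.txt
```

Note that <strong>recv.sh</strong> <em>sources</em> the processing script, so anything the script defines (functions, variables, shell options) leaks into the receive loop's shell; keeping the script small and side-effect free avoids surprises.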
<p>After coding these two scripts, we can define the <strong>systemd</strong> service that will execute the <strong>recv.sh</strong> script.</p>
<pre><code class="lang-bash">[Unit]
Description=Receive Service

[Service]
ExecStart=/usr/<span class="hljs-built_in">local</span>/bin/transferor/recv.sh /usr/<span class="hljs-built_in">local</span>/bin/transferor/inTray /usr/<span class="hljs-built_in">local</span>/bin/transferor/echo.sh
StandardOutput=null
Restart=on-failure

[Install]
WantedBy=multi-user.target
Alias=recv.service
</code></pre>
<h2 id="heading-deployment">Deployment</h2>
<p>Once we have all the scripts and services defined, we can deploy them to the proper folders and set them up to be run as systemd services.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

TARGET=/usr/<span class="hljs-built_in">local</span>/bin/transferor

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Creating directory <span class="hljs-variable">$TARGET</span>"</span>

sudo mkdir <span class="hljs-variable">$TARGET</span>
sudo mkdir <span class="hljs-variable">$TARGET</span>/inTray
sudo chown al:al <span class="hljs-variable">$TARGET</span>/inTray
sudo mkdir <span class="hljs-variable">$TARGET</span>/outTray
sudo chown al:al <span class="hljs-variable">$TARGET</span>/outTray

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Copying files into <span class="hljs-variable">$TARGET</span>"</span>
sudo cp app/echo.sh <span class="hljs-variable">$TARGET</span>
sudo cp recv/recv.sh <span class="hljs-variable">$TARGET</span>
sudo cp recv/recv.service <span class="hljs-variable">$TARGET</span>
sudo cp send/send.sh <span class="hljs-variable">$TARGET</span>
sudo cp send/send.service <span class="hljs-variable">$TARGET</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Installing services"</span>
sudo ln -s <span class="hljs-variable">$TARGET</span>/recv.service /etc/systemd/system
sudo systemctl <span class="hljs-built_in">enable</span> recv
sudo ln -s <span class="hljs-variable">$TARGET</span>/send.service /etc/systemd/system
sudo systemctl <span class="hljs-built_in">enable</span> send

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Done"</span>
</code></pre>
<p>After deploying the scripts and services, we can reboot the machine, or simply reload the systemd configuration and start the services by hand. Either way, we can then check the status of the defined services.</p>
<pre><code class="lang-bash">sudo reboot

# or, without rebooting:
sudo systemctl daemon-reload
sudo systemctl start recv send

sudo systemctl status recv
sudo systemctl status send
</code></pre>
<h2 id="heading-testing-the-example">Testing the example</h2>
<p>At this point, we should have both services up and running, so we can do a quick test to check that they work as expected. To do so, we just have to move a test file into the out_tray folder, and we will then see the traces printed by the <strong>logger</strong> utility in the system journal.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"test"</span> &gt; test.txt
sudo mv test.txt /usr/<span class="hljs-built_in">local</span>/bin/transferor/outTray

journalctl -f
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this article, we have seen a simple way of defining a systemd service that lets you send files between hosts just by moving them into an outbound folder, and run a script to process each received file on the destination host. That way, you could automate the distribution and processing of files among multiple machines. Obviously, this example is not suitable for production use, but it gives you a grasp of how basic utilities like <strong>inotifywait</strong>, <strong>logger</strong> and <strong>curl</strong> work together with <strong>systemd</strong> service definitions.</p>
]]></content:encoded></item></channel></rss>