<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[NEUROTECH  AFRICA]]></title><description><![CDATA[Big Data, Machine Learning, Natural language processing, LLMs, Conversational AI, Business  Intelligence ]]></description><link>https://blog.neurotech.africa/</link><image><url>https://blog.neurotech.africa/favicon.png</url><title>NEUROTECH  AFRICA</title><link>https://blog.neurotech.africa/</link></image><generator>Ghost 4.34</generator><lastBuildDate>Mon, 06 Oct 2025 22:39:51 GMT</lastBuildDate><atom:link href="https://blog.neurotech.africa/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Ghala webhook]]></title><description><![CDATA[Ghala delivers real-time event updates, keeping you informed as they happen. By prioritizing automation, it removes the need for manual effort, streamlining workflows and boosting efficiency.]]></description><link>https://blog.neurotech.africa/ghala-webhook/</link><guid isPermaLink="false">677b89b85a0e5405410db1c0</guid><category><![CDATA[ghala]]></category><category><![CDATA[social-commerce]]></category><category><![CDATA[African Businesses]]></category><category><![CDATA[e-commerce]]></category><dc:creator><![CDATA[jovine Mutelani]]></dc:creator><pubDate>Fri, 17 Jan 2025 19:30:35 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2025/01/Screenshot-2025-01-09-at-18.40.51.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2025/01/Screenshot-2025-01-09-at-18.40.51.png" alt="Ghala webhook"><p>Supercharge your <a href="https://ghala.co.tz"><strong>Ghala</strong></a> shop by getting the latest updates on certain events as they occur in the system. It&apos;s simple to listen for a specific event and make updates accordingly. 
This enables shop owners to tap into the power of <strong>Ghala webhooks</strong> to connect with any external service.</p><p><strong><a href="https://ghala.tz">Ghala</a></strong> now lets you listen to events as they happen on your shop. Do you have a system that you would like to receive these events? This feature comes in handy, with security measures to ensure events are easily verified on the receiving end.</p><p>You might ask: why have this when I can do it all on the platform, and just navigate to the dashboard to get all the latest details? That&apos;s correct, the platform is already packed with all you need. Ghala already hands you notifications via email or SMS, but only on successful payment, and notifications do not provide a way for you or your dev team to automate other tasks such as connecting with a <em>logistics handler</em>.</p><p>Ohh, this looks like a developer feature, right? I guess so, but whoever has the ability to connect and listen to events from Ghala should benefit from the feature.</p><p>Enough talk; let&apos;s get a gist of what it means to have and connect to the webhook service. Let&apos;s get into developer mode...</p><h2 id="about-webhook">About Webhook</h2><p>I&apos;m not good at storytelling, but I will try to explain webhooks in a story fashion, just a short one.</p><p>A Ghala webhook works like an alarm. You would set it to ring at 05:00 to wake you up. When the alarm goes off at 05:00, you can either <em><strong>snooze</strong></em> or <em><strong>dismiss</strong></em> it [options in my alarm]. With a <em>snoozed</em> alarm, you will keep getting alarms after a particular interval. If you dismiss it, the alarm never goes off again. </p><p>Ghala webhooks work the same way: when you decide to listen for a particular event, you will get notified almost immediately, then you will have to acknowledge (dismiss) it. 
If you do not acknowledge (snooze), you will get the request again after defined intervals.</p><h2 id="webhook-events">Webhook Events</h2><p>Currently, we support only four kinds of events, namely <strong><em>order created, order cancelled, order updated and successful payment</em></strong>.</p><p>Let&apos;s explore the sample requests that will be sent to the configured URL.</p><ul><li>Order Created</li></ul><p>	Enum: &#xA0;order.created</p><p>	Sample Request:</p><pre><code class="language-json">{
  &quot;event&quot;: &quot;order.created&quot;,
  &quot;data&quot;: {
    &quot;customer&quot;: {
      &quot;name&quot;: &quot;Sarufi Ghala&quot;,
      &quot;phone&quot;: &quot;255757294146&quot;,
      &quot;email&quot;: &quot;info@ghala.io&quot;
    },
    &quot;order&quot;: {
      &quot;id&quot;: 1,
      &quot;total&quot;: 1000,
      &quot;currency&quot;: &quot;TZS&quot;,
      &quot;products&quot;: [{ &quot;name&quot;: &quot;Product 1&quot;, &quot;price&quot;: 500, &quot;quantity&quot;: 2 }]
    }
  }
}
</code></pre><ul><li>Order Cancelled</li></ul><p>	Enum: order.cancelled</p><p>	Sample Data:</p><pre><code class="language-json">{
  &quot;event&quot;: &quot;order.cancelled&quot;,
  &quot;data&quot;: {
    &quot;customer&quot;: {
      &quot;name&quot;: &quot;Sarufi Ghala&quot;,
      &quot;phone&quot;: &quot;255757294146&quot;,
      &quot;email&quot;: &quot;info@ghala.io&quot;
    },
    &quot;order&quot;: {
      &quot;id&quot;: 1,
      &quot;total&quot;: 1000,
      &quot;currency&quot;: &quot;TZS&quot;,
      &quot;products&quot;: [{ &quot;name&quot;: &quot;Product 1&quot;, &quot;price&quot;: 500, &quot;quantity&quot;: 2 }]
    }
  }
}
</code></pre><ul><li>Order Updated</li></ul><p>	Enum: order.updated</p><p>	Sample Data:</p><pre><code class="language-json">{
  &quot;event&quot;: &quot;order.updated&quot;,
  &quot;data&quot;: { &quot;order_id&quot;: 1, &quot;old_status&quot;: &quot;paid&quot;, &quot;new_status&quot;: &quot;picked up&quot; }
}
</code></pre><ul><li>Successful Payment</li></ul><p>	Enum: payment.success</p><p>	Sample Data:</p><pre><code class="language-json">{
  &quot;event&quot;: &quot;payment.success&quot;,
  &quot;data&quot;: {
    &quot;customer&quot;: {
      &quot;name&quot;: &quot;Sarufi Ghala&quot;,
      &quot;email&quot;: &quot;info@ghala.io&quot;,
      &quot;phone_number&quot;: &quot;255757294146&quot;
    },
    &quot;amount&quot;: 1000,
    &quot;payment_number&quot;: &quot;255757294146&quot;,
    &quot;transaction_id&quot;: 1,
    &quot;order_id&quot;: 1
  }
}</code></pre><p>With Ghala, events are gracefully retried after the following intervals: 2 minutes, 5 minutes, 30 minutes, 2 hours, 5 hours and 12 hours. This happens whenever the receiving system <em>snoozes</em> the request from Ghala.</p><h2 id="setting-webhook">Setting Webhook</h2><p>The first thing required is a live API that will receive events. Log in to your <a href="https://ghala.tz/dashboard">Ghala shop</a>, navigate to Settings, then to the Integration tab. Fill in all required fields &gt; <strong>Save</strong> or <strong>Test the event</strong>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2025/01/Screenshot-2025-01-09-at-16.34.46.png" class="kg-image" alt="Ghala webhook" loading="lazy" width="2000" height="1382" srcset="https://blog.neurotech.africa/content/images/size/w600/2025/01/Screenshot-2025-01-09-at-16.34.46.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2025/01/Screenshot-2025-01-09-at-16.34.46.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2025/01/Screenshot-2025-01-09-at-16.34.46.png 1600w, https://blog.neurotech.africa/content/images/2025/01/Screenshot-2025-01-09-at-16.34.46.png 2200w" sizes="(min-width: 720px) 720px"><figcaption>Ghala Integration: Webhook</figcaption></figure><p>Keep in mind that only active webhooks will be triggered. After saving you will get a secret key; keep it, as it will be used to create the signature that will be part of the request headers.</p><h2 id="webhook-security">Webhook Security</h2><p>With the webhook set and the webhook secret key provided, let&apos;s take a look at how to verify webhook events from Ghala. 
Given the sensitive nature of these events and the possible consequences of acting on forged ones, Ghala provides a way to verify events against common attacks.</p><p>To ensure the authenticity of events received from Ghala, the following verifications have been put in place:</p><ul><li>Avoiding replay attacks</li><li>Signature validation</li></ul><h3 id="avoid-replay-attacks">Avoid Replay Attacks</h3><p>Inside the webhook headers there is an additional field, <em><strong>X-Ghala-timestamp</strong></em>, indicating the Unix timestamp when the webhook was signed. This timestamp can be used to reduce the chance of a <a href="https://en.wikipedia.org/wiki/Replay_attack">replay attack</a>.</p><h3 id="signature-validation">Signature Validation</h3><p>Events sent from Ghala contain the <em><strong>X-Ghala-Signature</strong></em> header. To ensure the webhook&apos;s authenticity and confirm it originates from our service, we recommend the following steps to compare a generated signature with the signature from the header.</p><ul><li>Construct the signed content</li></ul><p>	To generate the signed content, concatenate the <em><strong>X-Ghala-timestamp</strong></em> and the raw request body. The raw request body is the JSON payload sent in the request. Use <code>.</code> as the separator between the timestamp and the raw request body, both as strings.</p><pre><code class="language-python">headers = request.headers
# 1. Get headers [signature and timestamp]
signature = headers.get(&quot;X-Ghala-Signature&quot;)
timestamp = headers.get(&quot;X-Ghala-timestamp&quot;)

# 2. Get the body
body = await request.body()

# 3. Decode the body
decoded_body = body.decode()

# 4. create a signed payload [do not alter the order]
content = f&quot;{timestamp}.{decoded_body}&quot;
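# (Illustrative example, not from the Ghala docs) For a timestamp of
# 1700000000 and a raw body of {&quot;event&quot;:&quot;order.created&quot;}, the signed
# content is the single string 1700000000.{&quot;event&quot;:&quot;order.created&quot;}
# (timestamp first, dot separator, body unmodified).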

# 5. Encode the payload
content = content.encode(&quot;utf-8&quot;)</code></pre><blockquote>Make sure to use the raw body, as any modification to the body will result in a different signature.</blockquote><p></p><ul><li>Generate the expected signature</li></ul><p>	To calculate the expected signature, you will need the secret key obtained when registering the webhook. Use the secret key and the signed content to generate an HMAC-SHA256 signature as bytes.</p><pre><code class="language-python">import hmac
import hashlib
import base64

secret_key=&quot;your_secret_key&quot; # Obtained when registering the webhook
secret_key = secret_key.encode(&apos;utf-8&apos;)

# Create a new signature from the content
new_signature = hmac.new(secret_key, content, hashlib.sha256).digest()

# Create a base64-encoded signature
expected = base64.b64encode(new_signature).decode(&quot;utf-8&quot;).strip()</code></pre><p></p><ul><li>Compare the generated signature with the signature from the header</li></ul><p>	As highlighted above, the header carries the signature that was used to sign the request. Compare the generated signature with the signature from the header. </p><p>	If the two signatures match, the request is valid and you can proceed with processing the event. </p><blockquote>Any response status code other than <strong>200, 201</strong> or <strong>202</strong> will be treated as a snoozed alarm, so there will be a retry.</blockquote><p>It is recommended to use a constant-time string comparison to prevent any possibility of a <a href="https://en.wikipedia.org/wiki/Timing_attack">timing attack</a>.</p><h3 id="sample-code">Sample Code</h3><p>Starter code to get you going with webhook data verification. More examples will be added to the official Ghala docs, and other sample code will be published as soon as possible to fuel integration with other systems.</p><pre><code class="language-python">import base64
import hashlib
import hmac

from flask import Flask, request

app = Flask(__name__)

@app.route(&apos;/webhook&apos;, methods=[&apos;POST&apos;])
def webhook():

    # 1. Construct the signed content
    headers = request.headers
    request_body = request.get_data(as_text=True)

    webhook_timestamp = headers.get(&apos;X-Ghala-timestamp&apos;)

    signed_content = f&quot;{str(webhook_timestamp)}.{str(request_body)}&quot;.encode()
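    # (Illustrative, optional) You could also reduce replay-attack risk here by
    # rejecting stale events, per the Avoid Replay Attacks section above. The
    # 300-second tolerance is an assumed value, not an official Ghala one
    # (requires: import time).
    # if abs(time.time() - int(webhook_timestamp or 0)) > 300:
    #     return &quot;Stale request&quot;, 400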
    
    # 2. Generate the expected signature
    secret_key=&quot;your_secret_key&quot; # Obtained when registering the webhook
    secret_key = secret_key.encode(&apos;utf-8&apos;)
    
    # New hmac signature
    signature = hmac.new(secret_key, signed_content, hashlib.sha256).digest()
    # Encode the signature to base64
    signature = base64.b64encode(signature).decode()
    
    # 3. Compare the generated signature with the signature from the header
    ghala_signature = headers.get(&quot;X-Ghala-Signature&quot;)
    if not hmac.compare_digest(ghala_signature, signature):
    
        # Return any status code that is not among [200, 201, 202]
        return &quot;Invalid request&quot;, 400

    # Perform your actions with the data provided, then return a status code among [200, 201, 202]
    # It can be a background task as the request timeout is about 10s
    
    return &quot;Processed&quot;, 200</code></pre><p>It&apos;s getting that developer vibe, so get ready for a lot of integration support that is currently on the way. We start with the beta release of webhook support.</p><p></p><h2 id="wrap-up">Wrap up</h2><p>With webhooks, Ghala intends to bring developers closer to integrating with existing systems. More features are coming, so stay alert.</p>]]></content:encoded></item><item><title><![CDATA[Ghala: Sarufi-like Commerce Ready]]></title><description><![CDATA[Ghala brings the solution by focusing on what businesses care about, automating most of the parts involved in business-customer interaction.

The seller just sets up the shop and is then notified of new orders, with order details.]]></description><link>https://blog.neurotech.africa/introducing-sarufi-commerce-compatible/</link><guid isPermaLink="false">64cd0b325a0e5405410d8dcb</guid><category><![CDATA[e-commerce]]></category><category><![CDATA[African Businesses]]></category><dc:creator><![CDATA[jovine Mutelani]]></dc:creator><pubDate>Tue, 19 Nov 2024 12:29:27 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/11/Screenshot-2024-11-01-at-12.15.22.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/11/Screenshot-2024-11-01-at-12.15.22.png" alt="Ghala: Sarufi-like Commerce Ready"><p>With social platforms being the major <strong>go-to</strong> sales platforms for many businesses in Tanzania and other areas around the globe, business owners sometimes find it difficult to manage customer interactions, leading to loss of potential sales. Why not adopt automation on the same platforms that sellers are already comfortable with, mostly <strong>WhatsApp</strong>?</p><p>That led to the development of <strong><a href="https://sarufi.io">Sarufi</a></strong> to automate customer-interaction chatbots. As the famous quote goes, <code>If you can&#x2019;t beat them, join them</code>; Sarufi made it possible to join the social platforms in enhancing customer engagement via chatbots.</p><p>Sarufi being developer-centric, most business people found it a heavy-lifting task to get something sketched super quickly and integrated. With the increase of users building commerce experiences via Sarufi chatbots, a Sarufi-like commerce-focused solution had to be thought of and sketched. 
&#xA0;Then, here we are with the introduction of a <strong>Beta</strong> twin platform focusing on the commerce experience, &#x1F6D2;&#x1F6CD;&#xFE0F; <strong><a href="https://ghala.tz">Ghala</a></strong> (which simply translates to <em>Store/Warehouse</em>). </p><p>Here is a bit of what you would need to do if you had to use Sarufi to build a commerce experience for your shop. It is just a very high-level overview of the tasks you would have to do manually to at least place an order. With Sarufi you would have to create a chatbot, handle some external site integrations, and manage user navigation for viewing products from your API and rendering them to the chatbot.</p><p>Let&apos;s make it easy for business owners to have the easiest way of handling their sales, as what they care about most is how you help them increase sales on the same platform they are used to. Ghala comes to the rescue...</p><h2 id="ghala-brief">Ghala Brief</h2><p><a href="https://ghala.tz"><strong>Ghala</strong></a> is Sarufi&apos;s twin focusing on commerce, streamlining the commerce experience that takes place on social platforms. It is made to be commerce-friendly to sellers and customers, providing customers with the best experience. It currently supports WhatsApp integration.</p><p>What does a business care about?</p><ul><li>Customer experience</li><li>Sales</li><li>Management</li></ul><p>Ghala brings the solution by focusing on what businesses care about, automating most of the parts involved in business-customer interaction. Helping business owners manage customer interaction and order processing is the key focus of Ghala.</p><p>The seller just sets up the shop and is then notified of new orders, with order details. 
The chatbot underneath is able to respond to most FAQs as provided by the business.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/11/Screenshot-2024-11-01-at-10.36.14.png" class="kg-image" alt="Ghala: Sarufi-like Commerce Ready" loading="lazy" width="938" height="1236" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/11/Screenshot-2024-11-01-at-10.36.14.png 600w, https://blog.neurotech.africa/content/images/2024/11/Screenshot-2024-11-01-at-10.36.14.png 938w" sizes="(min-width: 720px) 720px"></figure><p></p><p>Ohh wait, isn&apos;t that WhatsApp Business, you might ask &#x1F914;? Okay, clarification below.</p><p>Ghala utilizes WhatsApp Business features while providing more automation, such as payment collection and order management. Currently, using the WhatsApp Business app you cannot automatically collect payments, with the exception of a few countries. Ghala focuses on making progress on automation via socials faster, especially around Africa. You can read a bit on the difference between the <a href="https://blog.neurotech.africa/know-the-difference-between-whatsapp-app-and-api"><strong>WhatsApp App vs Cloud API</strong></a>. </p><h3 id="dashboard">Dashboard</h3><p>As highlighted in the image above, you would expect the dashboard to allow easy management. 
The dashboard is intended to be as simple as possible to use.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/11/Screenshot-2024-11-01-at-10.38.03.png" class="kg-image" alt="Ghala: Sarufi-like Commerce Ready" loading="lazy" width="2000" height="1173" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/11/Screenshot-2024-11-01-at-10.38.03.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/11/Screenshot-2024-11-01-at-10.38.03.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/11/Screenshot-2024-11-01-at-10.38.03.png 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/11/Screenshot-2024-11-01-at-10.38.03.png 2400w" sizes="(min-width: 720px) 720px"></figure><p></p><p>I know the question in mind is like, ohh wait i want to see a bit of customer interaction kiddooo... &#x1F914; , Got a video for you here. </p><figure class="kg-card kg-video-card kg-card-hascaption"><div class="kg-video-container"><video src="https://blog.neurotech.africa/content/media/2024/11/BAFREDO-Electronics-How-to-Buy-Vid--1---1---1---1-.mp4" poster="https://img.spacergif.org/v1/2250x2250/0a/spacer.png" width="2250" height="2250" playsinline preload="metadata" style="background: transparent url(&apos;https://blog.neurotech.africa/content/images/2024/11/Screenshot-2024-11-01-at-13.29.13.png&apos;) 50% 50% / cover no-repeat;"></video><div class="kg-video-overlay"><button class="kg-video-large-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button></div><div class="kg-video-player-container"><div class="kg-video-player"><button class="kg-video-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 
13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button><button class="kg-video-pause-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/><rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/></svg></button><span class="kg-video-current-time">0:00</span><div class="kg-video-time">/<span class="kg-video-duration"></span></div><input type="range" class="kg-video-seek-slider" max="100" value="0"><button class="kg-video-playback-rate">1&#xD7;</button><button class="kg-video-unmute-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/></svg></button><button class="kg-video-mute-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/></svg></button><input type="range" class="kg-video-volume-slider" max="100" value="100"></div></div></div><figcaption>Customer interaction with the Shop</figcaption></figure><h2 id="coming-up">Coming Up</h2><p>With beta version of Ghala to be out, the building iteration continues</p><ul><li>External PSP (Payment Service Provider) integration</li><li>Delivery Services Integration</li><li>Many more for business experience features</li></ul><h2 id="let-catch-up">Let catch up</h2><p>Let connect in case you have question &#xA0;<a 
href="https://twitter.com/JovineMutelani">twitter</a></p>]]></content:encoded></item><item><title><![CDATA[Using Association Rules for Recommendation]]></title><description><![CDATA[<h3 id="1-introduction">1. Introduction</h3><h4 id="what-is-this-about-%F0%9F%92%A1">What is this about &#x1F4A1;</h4><p>This is a short guide on building a product recommendation system using association rules, for simple next-item suggestions from a list of previous items. It is also good for tasks that just need a quick recommender.</p><h4 id="purpose-of-the-recommendation-system">Purpose of the Recommendation System</h4><p>The main goal of</p>]]></description><link>https://blog.neurotech.africa/using-association-rules-for-recommendation/</link><guid isPermaLink="false">66ad103c5a0e5405410daeac</guid><dc:creator><![CDATA[Edgar Gulay]]></dc:creator><pubDate>Fri, 02 Aug 2024 17:07:39 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/08/image_fx_text_saying_associated_ingredients--.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="1-introduction">1. Introduction</h3><h4 id="what-is-this-about-%F0%9F%92%A1">What is this about &#x1F4A1;</h4><img src="https://blog.neurotech.africa/content/images/2024/08/image_fx_text_saying_associated_ingredients--.jpg" alt="Using Association Rules for Recommendation"><p>This is a short guide on building a product recommendation system using association rules, for simple next-item suggestions from a list of previous items. It is also good for tasks that just need a quick recommender.</p><h4 id="purpose-of-the-recommendation-system">Purpose of the Recommendation System</h4><p>The main goal of this recommendation system is to enhance the shopping experience by providing personalized suggestions to customers. By analyzing past transaction data, we can identify patterns and relationships between different products. 
These insights allow us to recommend complementary ingredients that customers might be interested in, helping them discover new products and make more informed purchasing decisions.</p><h4 id="but-we-wont-do-that">But we won&apos;t do that</h4><p>This tutorial is designed for data enthusiasts, developers, and anyone interested in doing what was described in the last section. And since I have you here, we are all going to suggest food ingredients &#x1F60B;</p><h3 id="2-prerequisites">2. Prerequisites</h3><h4 id="basic-knowledge-of-python">Basic Knowledge of Python</h4><p>Before diving into this tutorial, it is essential to have a basic understanding of Python programming. Familiarity with Python&apos;s syntax and basic data structures will help you follow along with the code examples and understand the logic behind the implementation.</p><ul><li>A little bit of pandas, plus your good copy-pasting skills, if you don&apos;t mind</li></ul><h4 id="introduction-to-association-rule-mining-and-its-importance">Introduction to Association Rule Mining and Its Importance</h4><p>Association rule mining is a data mining technique used to identify interesting relationships or patterns between different items in large datasets. It is particularly useful in market basket analysis, where the goal is to discover associations between products purchased together.<br>In this case, ingredients that occur together.</p><h3 id="3-setting-up-the-environment">3. Setting Up the Environment</h3><h4 id="tools-and-libraries">Tools and Libraries</h4><p>We&apos;ll be using Python, along with the <code>pandas</code> and <code>mlxtend</code> libraries for data manipulation and association rule mining.</p><h4 id="installation-instructions">Installation Instructions</h4><p>First, make sure you have Python installed on your system. You can download and install Python from the <a href="https://www.python.org/">official website</a>. 
Once Python is installed, you&apos;ll need to install the necessary libraries. You can do this using <code>pip</code>, the Python package installer.</p><p>Open your terminal or command prompt and run the following commands:</p><pre><code class="language-sh">pip install pandas mlxtend
</code></pre><h3 id="4-preparing-the-dataset">4. Preparing the Dataset</h3><h4 id="dataset-to-use">Dataset to Use</h4><p>I had a quick chat with ChatGPT and asked <code>can you mention 50 traditional Tanzanian foods</code>, then <code>now for each traditional Tanzanian food you mentioned, mention its ingredients in a python list. create ingredients = [ [food 1], [food 2] ... ]</code>, and this was the return:</p><pre><code class="language-python">ingredients = [
    [&quot;maize flour&quot;, &quot;water&quot;, &quot;salt&quot;],  # Ugali
    [&quot;beef&quot;, &quot;goat meat&quot;, &quot;salt&quot;, &quot;spices&quot;],  # Nyama Choma
    [&quot;flour&quot;, &quot;meat&quot;, &quot;vegetables&quot;, &quot;spices&quot;, &quot;oil&quot;],  # Samosa
    [&quot;flour&quot;, &quot;water&quot;, &quot;salt&quot;, &quot;oil&quot;],  # Chapati 
    # ... more  ...
]
</code></pre><p>&#x1F602; who uses salt in ugali? If you do, you are weird...</p><h4 id="creating-the-data">Creating the Data</h4><p>We&apos;ll use the <code>TransactionEncoder</code> from the <code>mlxtend</code> library to convert the list of transactions into a format suitable for analysis.</p><pre><code class="language-python">import pandas as pd
from mlxtend.preprocessing import TransactionEncoder

# Initialize the TransactionEncoder
te = TransactionEncoder()
te_ary = te.fit_transform(ingredients)

# Convert to DataFrame
df = pd.DataFrame(te_ary, columns=te.columns_)
</code></pre><h3 id="5-generating-association-rules">5. Generating Association Rules</h3><h4 id="frequent-itemsets">Frequent Itemsets</h4><p>Use the Apriori algorithm to generate frequent itemsets from the transaction data. These itemsets represent combinations of ingredients that appear together frequently.</p><pre><code class="language-python">from mlxtend.frequent_patterns import apriori

# Generate frequent itemsets with a minimum support of 0.2
frequent_itemsets = apriori(df, min_support=0.2, use_colnames=True)
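
# (Illustrative) frequent_itemsets is a pandas DataFrame with &apos;support&apos; and
# &apos;itemsets&apos; columns; peek at the most common ingredient combinations:
print(frequent_itemsets.sort_values(by=&quot;support&quot;, ascending=False).head())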
</code></pre><h4 id="association-rules">Association Rules</h4><p>Next, we will derive association rules from the frequent itemsets. These rules will help us understand the relationships between different products.</p><pre><code class="language-python">from mlxtend.frequent_patterns import association_rules

# Generate association rules with a minimum support of 0.2
rules = association_rules(frequent_itemsets, metric=&quot;support&quot;, min_threshold=0.2)
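
# (Illustrative) association_rules returns a DataFrame pairing antecedents with
# consequents, plus metric columns such as support, confidence and lift:
print(rules[[&quot;antecedents&quot;, &quot;consequents&quot;, &quot;support&quot;, &quot;confidence&quot;, &quot;lift&quot;]].head())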
</code></pre><h3 id="6-creating-the-recommendation-function">6. Creating the Recommendation Function</h3><p>We will define a function to recommend products based on the association rules. This function will take a list of products and return a list of recommended items, ordered by confidence and lift.</p><pre><code class="language-python"># here ingredients -&gt; products

def recommend_ingredients(products, rules=rules, top_n=10):
    rules[&apos;antecedents&apos;] = rules[&apos;antecedents&apos;].apply(lambda x: tuple(x))
    rules[&apos;consequents&apos;] = rules[&apos;consequents&apos;].apply(lambda x: tuple(x))
    recommendations = rules[rules[&apos;antecedents&apos;].apply(lambda x: any(product in x for product in products))]
    recommendations = recommendations.sort_values(by=[&apos;confidence&apos;, &apos;lift&apos;], ascending=False)
    top_recommendations = recommendations.head(top_n)

    result = []
    for _, row in top_recommendations.iterrows():
        for item in row[&apos;consequents&apos;]:
            if item not in result:
                result.append(item.lower())
    return result
</code></pre><h3 id="7-testing-the-recommendation-system">7. Testing the Recommendation System</h3><h4 id="example-usage">Example Usage</h4><p>Let&apos;s test the recommendation system with an example list of ingredients.</p><pre><code class="language-python">product_list = [&apos;oil&apos;, &apos;salt&apos;]
prods = recommend_ingredients(product_list)

print(prods)

</code></pre><h4 id="expected-output">Expected Output</h4><p>The output will be a list of recommended ingredients, showing which are most frequently associated with the input.</p><figure class="kg-card kg-image-card"><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0b8hkr595fnmzi632mmt.png" class="kg-image" alt="Using Association Rules for Recommendation" loading="lazy"></figure><h3 id="8-conclusion">8. Conclusion</h3><h4 id="recap">Recap</h4><p>In this tutorial, we have walked through the process of building a recommendation system using association rules. We covered data preparation, frequent itemset generation, rule mining, and how to create a recommendation function based on these rules.</p><h4 id="further-exploration">Further Exploration</h4><p>You can further explore by experimenting with different datasets, adjusting the parameters for the Apriori algorithm, and fine-tuning the recommendation function. This will help you understand the nuances of association rule mining and its application in various domains.</p><p>&#x26A0;&#xFE0F; Also go read about the terms used, like support, confidence and lift.</p><h4 id="additional-resources">Additional Resources</h4><ul><li><a href="https://github.com/eddiegulay/Map-Associate">Source Codes for all this stuff at github.com/eddiegulay</a></li><li><a href="https://rasbt.github.io/mlxtend/user_guide/frequent_patterns/association_rules/">Association Rule Mining in Python with mlxtend</a></li><li><a href="https://pandas.pydata.org/pandas-docs/stable/">Pandas Documentation</a></li></ul><h3 id="9-qa-section">9. 
Q&amp;A Section</h3><h4 id="common-questions">Common Questions</h4><p><strong>What if my dataset is large?</strong></p><ul><li>For large datasets, consider more efficient algorithms or sampling methods to keep processing manageable.</li></ul><p><strong>How do I choose the right support and confidence thresholds?</strong></p><ul><li>Experiment with different thresholds to find a balance between generating useful rules and avoiding too many irrelevant ones.</li></ul><p><strong>Can I use this method for other types of data?</strong></p><ul><li>Yes, association rule mining can be applied to various types of transactional data, not just kitchen ingredients.</li></ul><h3 id="10-finally">10. Finally</h3><p>If you are a programmer, go finish that project and stop procrastinating. For everyone else, it&apos;s been nice to have you here &#x1F642;</p>]]></content:encoded></item><item><title><![CDATA[Sarufi Dashboard Overview]]></title><description><![CDATA[Sarufi has evolved from a CLI tool to a web dashboard, and now to a more collaborative workspace with V0.2. The new dashboard comes with...]]></description><link>https://blog.neurotech.africa/sarufi-dashboard/</link><guid isPermaLink="false">66850c0d5a0e5405410dac00</guid><category><![CDATA[sarufi]]></category><dc:creator><![CDATA[jovine Mutelani]]></dc:creator><pubDate>Sun, 07 Jul 2024 13:10:00 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-03-at-11.42.26.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-03-at-11.42.26.png" alt="Sarufi Dashboard Overview"><p><a href="https://sarufi.io"><strong>Sarufi</strong></a> has evolved from a CLI tool to a web dashboard, and now to a more collaborative workspace with V0.2. The new dashboard comes with many improvements, starting with the workflow.</p><p>This is a detailed guide to the new dashboard, to get you familiar with walking around your Sarufi garden of chatbots and users. 
The platform keeps evolving, which can make it hard for both legacy and new users to find their way around.</p><p>As always, taking a walk around your garden makes proper maintenance easy. After some time in design and development, we have Sarufi V0.2 out on the main site.</p><h1 id="overall-overview">Overall overview</h1><p>Sketches make things easy to grasp, so I will be using a lot of them in this walk, starting with the dashboard as a whole, seen below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-06-at-16.15.13.png" class="kg-image" alt="Sarufi Dashboard Overview" loading="lazy" width="1690" height="1426" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/07/Screenshot-2024-07-06-at-16.15.13.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/07/Screenshot-2024-07-06-at-16.15.13.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/07/Screenshot-2024-07-06-at-16.15.13.png 1600w, https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-06-at-16.15.13.png 1690w" sizes="(min-width: 720px) 720px"><figcaption>Sarufi Dashboard: Sketch Overview</figcaption></figure><p>The diagram shows an overview of the dashboard. Let&apos;s take a look at the major changes. The biggest one is the introduction of the Workspace as a workbench for collaboration among team members.</p><p>A user owns a workspace that holds all resources, as simple as that; the change is in the ownership order. Each workspace is an independent entity that houses its own resources.</p><p>Let&apos;s start our quick navigation around our Sarufi garden.</p><h2 id="workspace">Workspace</h2><p>The workspace, as said above, is the major part of the system; let&apos;s explore what it contains. 
On sign-up, you will be asked to create your profile and a workspace, as it&apos;s your <em>workbench</em>.</p><p>In a workspace you will find many sections, but the key parts are <strong>chatbots</strong>, <strong>members</strong>, <strong>Authorization (API key)</strong>, <strong>Credits</strong> and <strong>Usage analytics</strong>.</p><p>Currently, you can own up to <strong>5 workspaces</strong>, but you can be a member of as many workspaces as you like, opening the world of collaboration to developers.</p><p><em>The interactive collaboration feature is still under development as of this writing, so you may face issues when modifying the same chatbot.</em></p><h3 id="chatbots">Chatbots</h3><p>On entry, you will land in the chatbot section by default. Here you can view and create chatbots, and much more. Chatbots are the pillar of Sarufi, so let&apos;s get a brief overview. Below is a highly digested structure of what you will find in your chatbot.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-04-at-10.53.48.png" class="kg-image" alt="Sarufi Dashboard Overview" loading="lazy" width="2000" height="961" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/07/Screenshot-2024-07-04-at-10.53.48.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/07/Screenshot-2024-07-04-at-10.53.48.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/07/Screenshot-2024-07-04-at-10.53.48.png 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/07/Screenshot-2024-07-04-at-10.53.48.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Sarufi Dashboard: Chatbot Overview</figcaption></figure><p>Taking a look at the structural overview above, you can see there are two types of chatbots you can create, namely <strong>Flow-based</strong> and <strong>LLM based
</strong>, but you also have the option to build a <strong>Hybrid</strong> version. Each chatbot type comes with its own core functionalities. Visit our <a href="https://docs.sarufi.io/"><em>documentation</em></a> to learn more about chatbot features.</p><p>We make building chatbots as easy as possible: bring your solution into use within a few minutes of ideation.</p><h3 id="members">Members</h3><p>The idea of the workspace is straightforward: get people in a team to work together. Currently, members come in two layers, namely <em>Admin (Creator)</em> and <em>Invited (Team member)</em>. With the Admin as the high-level controller, invited members have limited permissions.</p><p>You will find all members of the workspace here; depending on your level of privileges, you will have different actions available.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/07/wkspace-users.png" class="kg-image" alt="Sarufi Dashboard Overview" loading="lazy" width="2000" height="773" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/07/wkspace-users.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/07/wkspace-users.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/07/wkspace-users.png 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/07/wkspace-users.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Sarufi Dashboard: Members</figcaption></figure><h3 id="credits">Credits</h3><p>The workspace, as an upgrade of Sarufi, introduces a payment plan: it&apos;s pay for what you use. You will only be charged per conversation.</p><p>A Sarufi conversation is a complete back-and-forth exchange of information between your user and the chatbot, as illustrated below. 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-04-at-11.29.53.png" class="kg-image" alt="Sarufi Dashboard Overview" loading="lazy" width="2000" height="859" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/07/Screenshot-2024-07-04-at-11.29.53.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/07/Screenshot-2024-07-04-at-11.29.53.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/07/Screenshot-2024-07-04-at-11.29.53.png 1600w, https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-04-at-11.29.53.png 2124w" sizes="(min-width: 720px) 720px"><figcaption>Sarufi Conversation: Illustration</figcaption></figure><p>The cost depends on which engine processes your conversation. Here is the breakdown:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th style="text-align:left">Chatbot Type</th>
<th style="text-align:right">Conversation Cost (TSh)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">Flow Based</td>
<td style="text-align:right">6</td>
</tr>
<tr>
<td style="text-align:left">LLM Based</td>
<td style="text-align:right">10</td>
</tr>
</tbody>
</table>
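To make the pricing concrete, here is a quick back-of-the-envelope estimate based on the per-conversation prices in the table above. The helper function is a hypothetical illustration, not part of Sarufi's API:

```python
# Per-conversation cost in TSh, taken from the pricing table above.
COST_PER_CONVERSATION = {"flow": 6, "llm": 10}

def estimate_monthly_cost(conversations):
    """Estimate credit usage (TSh) for a mix of chatbot types.

    `conversations` maps a chatbot type ("flow" or "llm") to the
    number of conversations handled in the period.
    """
    return sum(COST_PER_CONVERSATION[kind] * count
               for kind, count in conversations.items())

# e.g. 150 flow-based and 40 LLM-based conversations:
print(estimate_monthly_cost({"flow": 150, "llm": 40}))  # 150*6 + 40*10 = 1300 TSh
```

So a shop handling a few hundred conversations a month can budget its credit top-ups in advance.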
<!--kg-card-end: markdown--><p>On creating your first workspace, you will receive 2,000 TSh of credits.</p><p>Topping up your account is straightforward: navigate to your workspace <em>(on your profile icon &gt;&gt; View your workspace)</em> &gt;&gt; Purchase credits &gt;&gt; follow the instructions.</p><h3 id="workspace-usage">Workspace Usage</h3><p>In this section you will find all you need to know about your workspace credit usage. You will find three sections on usage, namely <strong>Overall usage</strong>, <strong>Chatbot activity</strong> and <strong>Activity Logs</strong>. You will be able to view how usage of your agents grows over time.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/07/Screenshot-2024-07-04-at-11.41.54.png" class="kg-image" alt="Sarufi Dashboard Overview" loading="lazy" width="2000" height="895" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/07/Screenshot-2024-07-04-at-11.41.54.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/07/Screenshot-2024-07-04-at-11.41.54.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/07/Screenshot-2024-07-04-at-11.41.54.png 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/07/Screenshot-2024-07-04-at-11.41.54.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Sarufi Dashboard: Workspace Usage</figcaption></figure><p>This marks the end of our walk through your new dashboard. With development in progress, expect to find more than the highlighted features.</p><h2 id="conclusion">Conclusion</h2><p>This is the end of our quick walk through the Sarufi Dashboard. The platform keeps evolving, adding support for new functionalities and features. The new dashboard is here to enhance collaboration. 
</p><blockquote>&quot;Great software isn&apos;t built by individual heroes; it&apos;s the result of teams working together, leveraging each other&apos;s strengths and covering each other&apos;s weaknesses.&quot; &#x2013; John Carmack</blockquote>]]></content:encoded></item><item><title><![CDATA[Don't Get Left Behind in the Customer Service Revolution!]]></title><description><![CDATA[<p>Remember those old cell phones that were super heavy and the internet that sounded like a broken robot? Businesses that still use that stuff like outdated computer programs need an update! Today&apos;s game-changer is AI chatbots and the prize? Your customers&apos; attention (and their money!).</p><p>Consider it:</p>]]></description><link>https://blog.neurotech.africa/dont-get-left-behind-in-the-customer-service-revolution/</link><guid isPermaLink="false">66603f195a0e5405410dabb6</guid><dc:creator><![CDATA[Chris Mendoza]]></dc:creator><pubDate>Wed, 05 Jun 2024 11:36:09 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/06/OIG1--4-.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/06/OIG1--4-.jpeg" alt="Don&apos;t Get Left Behind in the Customer Service Revolution!"><p>Remember those old cell phones that were super heavy and the internet that sounded like a broken robot? Businesses that still use that stuff like outdated computer programs need an update! Today&apos;s game-changer is AI chatbots and the prize? Your customers&apos; attention (and their money!).</p><p>Consider it: a potential customer has a burning question at 3 am. With an AI chatbot, they get an instant, helpful answer. Your competitor? Radio silence. Who gets the sale? <strong>The business that embraced the future, of course!</strong></p><p>Like social media marketing dethroned traditional advertising, AI chatbots are revolutionizing customer service. 
Those who deploy them on platforms like WhatsApp and social media will conquer the customer experience battleground. Here&apos;s why:</p><p><strong>WhatsApp: Your Customer&apos;s Most Intimate Space</strong></p><p>Imagine this: you can have a conversation with your favorite store directly within WhatsApp, your go-to messaging app. AI chatbots make this a reality. Customers can ask questions, track orders, or even initiate returns, all within the familiar comfort of WhatsApp. It&apos;s a convenient, personalized service that keeps them coming back for more.</p><p><strong>Social Media: Where Conversations Already Happen</strong></p><p>Social media is a constant buzz of connection. AI chatbots placed strategically on Facebook Messenger, Instagram, or Twitter can jump right into those conversations. They can answer product inquiries, schedule appointments, or even provide real-time support during live streams. It&apos;s seamless customer engagement that fosters brand loyalty.</p><p><strong>The Power of Integration: A Match Made in Customer Service Heaven</strong></p><p>The beauty of AI chatbots on these platforms is their integration. Data collected through WhatsApp interactions can be used to personalize Facebook Messenger experiences. Social media comments can be automatically routed to the chatbot for swift resolution. It&apos;s a unified customer service front that ensures no question goes unanswered, no matter the platform.</p><p><strong>Beyond WhatsApp and Social Media: The Chatbot Revolution Spreads</strong></p><p>While these platforms are prime battlegrounds, the chatbot revolution isn&apos;t limited to them. Imagine chatbots seamlessly integrated into your website, guiding customers through the buying journey. Think voice assistants powered by AI, offering hands-free support. 
&#xA0;The possibilities are endless.</p><p><strong>The Takeaway: Embrace the Chatbot Uprising or Face Extinction</strong></p><p>The message is clear: AI chatbots are no passing fad. They&apos;re the future of customer service. &#xA0;By deploying them on platforms like WhatsApp and social media, businesses can create a connected, personalized experience that keeps customers engaged and loyal. &#xA0;Don&apos;t be the dinosaur clinging to outdated methods. &#xA0;Embrace the chatbot uprising and watch your business thrive in the new era of customer engagement.</p>]]></content:encoded></item><item><title><![CDATA[Neurotech Africa is Nominated for the Digital Awards 2023, Best Startup of the Year.]]></title><description><![CDATA[<p>We&apos;re Nominated for the Tanzania Digital Awards 2024!</p><p>We&apos;re thrilled to announce that Neurotech Africa has been nominated for the <strong>Digital Awards 2023</strong> in the category of <strong>Best Startup of the Year</strong>!</p><p>This is a huge honor for our entire team, and we&apos;re incredibly</p>]]></description><link>https://blog.neurotech.africa/sarufi-v02-beta-is-out/</link><guid isPermaLink="false">66445e1a5a0e5405410dab81</guid><dc:creator><![CDATA[Omega Seyongwe]]></dc:creator><pubDate>Wed, 15 May 2024 14:51:10 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/05/VOTE-FOR-NEUROTECH-AFRICA.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/05/VOTE-FOR-NEUROTECH-AFRICA.jpg" alt="Neurotech Africa is Nominated for the Digital Awards 2023, Best Startup of the Year."><p>We&apos;re Nominated for the Tanzania Digital Awards 2024!</p><p>We&apos;re thrilled to announce that Neurotech Africa has been nominated for the <strong>Digital Awards 2023</strong> in the category of <strong>Best Startup of the Year</strong>!</p><p>This is a huge honor for our entire team, and we&apos;re incredibly grateful for the recognition. 
It wouldn&apos;t be possible without the dedication and hard work of everyone involved.</p><p>There&apos;s more exciting news! Our founder and CEO, <a href="https://www.linkedin.com/in/kalebu-gwalugano/">Kalebu Gwalugano</a>, is also nominated in the category of <strong>Best Innovator of the Year</strong>.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/05/VOTE-FOR-KALEBU.jpg" class="kg-image" alt="Neurotech Africa is Nominated for the Digital Awards 2023, Best Startup of the Year." loading="lazy" width="2000" height="2000" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/05/VOTE-FOR-KALEBU.jpg 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/05/VOTE-FOR-KALEBU.jpg 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/05/VOTE-FOR-KALEBU.jpg 1600w, https://blog.neurotech.africa/content/images/2024/05/VOTE-FOR-KALEBU.jpg 2000w" sizes="(min-width: 720px) 720px"></figure><p>We&apos;d be so grateful if you could take a moment to vote for Neurotech Africa and Kalebu. Here&apos;s how you can do it:</p><ol><li>Visit the Digital Awards website: <a href="https://digitalawards.co.tz/">https://digitalawards.co.tz/</a></li><li>Sign up or Sign in (if you already have an account).</li><li>Click the &quot;Vote&quot; button.</li><li>Scroll down to no (7) - Digital Innovation, Section <strong>(G) Digital Innovator</strong> <strong>of the Year</strong> category, and click <strong>&quot;Vote&quot; for Kalebu Gwalugano.</strong></li><li>Then, go to section <strong>(H) Startup of the Year</strong> section and click<strong> &quot;Vote&quot; for Neurotech Africa.</strong></li></ol><p>Thank you for your incredible support! 
Every vote counts.</p><p>Let&apos;s bring these awards home!</p>]]></content:encoded></item><item><title><![CDATA[Why WhatsApp is Whispering the Future of E-commerce (and How AI Chatbots Will Shout it From the Rooftops)]]></title><description><![CDATA[<p>E-commerce is booming, but competition is fierce. Businesses are constantly searching for new ways to connect with customers and create a seamless shopping experience. WhatsApp: the messaging platform with over 2 billion users that&apos;s poised to revolutionize the way we shop online. Here&apos;s why:</p><p><strong>A Familiar</strong></p>]]></description><link>https://blog.neurotech.africa/why-whatsapp-is-whispering-the-future-of-e-commerce-and-how-ai-chatbots-will-shout-it-from-the-rooftops/</link><guid isPermaLink="false">663b22cb5a0e5405410dab33</guid><dc:creator><![CDATA[Chris Mendoza]]></dc:creator><pubDate>Wed, 08 May 2024 07:04:41 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/05/1715078829731.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/05/1715078829731.png" alt="Why WhatsApp is Whispering the Future of E-commerce (and How AI Chatbots Will Shout it From the Rooftops)"><p>E-commerce is booming, but competition is fierce. Businesses are constantly searching for new ways to connect with customers and create a seamless shopping experience. WhatsApp: the messaging platform with over 2 billion users that&apos;s poised to revolutionize the way we shop online. Here&apos;s why:</p><p><strong>A Familiar Friend in a Digital World:</strong></p><p>People are already comfortable using WhatsApp to chat with friends and family. This familiarity translates into a lower barrier to entry for businesses. 
Customers don&apos;t need to download a new app or create a new account &#x2013; they can simply message a familiar number.</p><p><strong>Hyper-Personalized Shopping:</strong></p><p>WhatsApp fosters a one-on-one connection, unlike any other platform. Businesses can leverage this to personalize the shopping experience. Imagine a customer browsing shoes online. They can send a quick WhatsApp message asking about size availability or requesting a specific picture. This real-time interaction builds trust and fosters a sense of customer service that&apos;s unmatched by traditional e-commerce platforms.</p><p><strong>The Power of Conversational Commerce:</strong></p><p>Gone are the days of static product pages. WhatsApp allows for dynamic, two-way conversations. AI chatbots can be integrated into your WhatsApp business account to handle basic inquiries, recommend products based on past purchases, and even guide customers through the checkout process. These chatbots become tireless salespeople, available 24/7 to answer questions and close deals.</p><p><strong>Building Relationships, Not Just Transactions:</strong></p><p>WhatsApp goes beyond just selling products. It fosters relationships. Businesses can use broadcast lists to send out special offers, personalized recommendations, and even loyalty program updates. This constant, yet non-intrusive, communication keeps customers engaged and strengthens brand loyalty.</p><p><strong>A Global Marketplace in Your Pocket:</strong></p><p>WhatsApp transcends geographical boundaries. With its massive user base and global reach, businesses can tap into new markets and connect with international customers with ease.</p><p><strong>The Future is Conversational:</strong></p><p>The way we interact with technology is shifting towards a more conversational approach. Voice assistants and chatbots are becoming commonplace. 
WhatsApp, with its already established user base and focus on real-time communication, is perfectly positioned to be at the forefront of this conversational commerce revolution.</p><p><strong>So, how can you get started?</strong></p><ul><li><strong>Set up a WhatsApp Business Account:</strong> This allows you to create a business profile, manage customer interactions, and utilize features like automated greetings and quick replies.</li><li><strong>Develop a Compelling Chatbot Strategy:</strong> Define what tasks you want your chatbot to handle and invest in creating a user-friendly experience. ( If you can&apos;t build one, don&apos;t worry, Experts from <a href="https://sarufi.io/">Neurotech Africa</a> will help you build one)</li><li><strong>Personalize the Experience:</strong> Use customer data to craft targeted messages and recommendations.</li><li><strong>Focus on Building Relationships:</strong> Don&apos;t be overly promotional. Use WhatsApp to connect with your customers on a more personal level.</li></ul><p>WhatsApp is whispering the future of e-commerce, and with the help of AI chatbots, it will soon be shouting it from the rooftops. Are you ready to listen?</p>]]></content:encoded></item><item><title><![CDATA[Is WhatsApp the Secret Weapon Your Business Needs?]]></title><description><![CDATA[<p>Let&apos;s be honest, reaching customers these days feels like trying to herd kittens. Emails get buried, and social media is a constant battle for attention and phone calls? Forget about it! &#xA0;There&apos;s gotta be a better way, right?</p><p>Well, there is! And guess what? 
It&</p>]]></description><link>https://blog.neurotech.africa/whatsapp-the-catalyst-of-business-success/</link><guid isPermaLink="false">663340155a0e5405410daa87</guid><dc:creator><![CDATA[Chris Mendoza]]></dc:creator><pubDate>Thu, 02 May 2024 11:33:03 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/05/Blog_Banner_v2-01.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/05/Blog_Banner_v2-01.jpg" alt="Is WhatsApp the Secret Weapon Your Business Needs?"><p>Let&apos;s be honest, reaching customers these days feels like trying to herd kittens. Emails get buried, and social media is a constant battle for attention and phone calls? Forget about it! &#xA0;There&apos;s gotta be a better way, right?</p><p>Well, there is! And guess what? It&apos;s probably already sitting in your pocket &#x2013; <strong>WhatsApp!</strong></p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/05/63d0cf5755b6925728afe350_WhatsApp-Monthly-Active-Users.jpg" class="kg-image" alt="Is WhatsApp the Secret Weapon Your Business Needs?" loading="lazy" width="2000" height="1571" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/05/63d0cf5755b6925728afe350_WhatsApp-Monthly-Active-Users.jpg 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/05/63d0cf5755b6925728afe350_WhatsApp-Monthly-Active-Users.jpg 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/05/63d0cf5755b6925728afe350_WhatsApp-Monthly-Active-Users.jpg 1600w, https://blog.neurotech.africa/content/images/2024/05/63d0cf5755b6925728afe350_WhatsApp-Monthly-Active-Users.jpg 2100w" sizes="(min-width: 720px) 720px"></figure><p> This messaging giant boasts over 2 billion monthly users &#x2013; that&apos;s more people than the entire population of North and South America combined! &#xA0;And guess what they&apos;re using it for? Chatting with friends, family, and... 
wait for it... <strong>potentially YOUR business!</strong></p><p>Hold on, you might think, &quot;WhatsApp is just for casual chats, right?&quot; Wrong! &#xA0;It&apos;s becoming a game-changer for businesses like yours and mine to connect with customers in a way that&apos;s personal, convenient, and crazy effective. &#xA0;Here&apos;s the scoop:</p><p><strong>1. It&apos;s Like Texting Your Best Friend (But Way More Professional):</strong> &#xA0;People love WhatsApp because it&apos;s easy. &#xA0;Customers can chat with you on their terms, whenever it fits their schedule. &#xA0;No need to download a new app or create an account &#x2013; it&apos;s already there! &#xA0;This makes them feel <strong>valued</strong> and way more likely to <strong>engage</strong> with your business.</p><p><strong>2. Ditch the Annoying Bots and utilize the AI Superhero!</strong> Now things get really cool. &#xA0;Imagine having a <strong>super-powered AI assistant</strong> working for you 24/7 on WhatsApp. &#xA0;This is where <strong>chatbot technology</strong> swoops in to save the day.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/05/Chat-with-our-WhatsApp-Chatbot-3.png" class="kg-image" alt="Is WhatsApp the Secret Weapon Your Business Needs?" 
loading="lazy" width="720" height="300" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/05/Chat-with-our-WhatsApp-Chatbot-3.png 600w, https://blog.neurotech.africa/content/images/2024/05/Chat-with-our-WhatsApp-Chatbot-3.png 720w" sizes="(min-width: 720px) 720px"></figure><p><strong>Think of it as a tireless teammate who can:</strong></p><ul><li>Answer those same customer questions you get all the time (you know, the ones that make you want to pull your hair out?)</li><li>Process orders in a flash (no more manual data entry!)</li><li>Schedule appointments like a champ (freeing you up for more important stuff)</li><li>Even provide basic customer support (so you can focus on the trickier issues)</li></ul><p>This frees you and your team up to focus on what really matters, while still keeping your customers happy as clams.</p><p><strong>3. Streamlining Operations? We Got This!</strong> &#xA0;Sick of repetitive tasks slowing you down? &#xA0;Chatbots can handle those too! &#xA0;Imagine <strong>automating appointment confirmations, order tracking, or even sending out special offers.</strong> &#xA0;This saves you tons of time, reduces errors, and keeps your business running like a well-oiled machine.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/05/Whatsapp-chatboooot-1.png" class="kg-image" alt="Is WhatsApp the Secret Weapon Your Business Needs?" loading="lazy" width="1053" height="720" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/05/Whatsapp-chatboooot-1.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/05/Whatsapp-chatboooot-1.png 1000w, https://blog.neurotech.africa/content/images/2024/05/Whatsapp-chatboooot-1.png 1053w" sizes="(min-width: 720px) 720px"></figure><p><strong>4. Building Relationships (and Sales!) That Don&apos;t Feel Salesy:</strong> &#xA0; WhatsApp lets you have <strong>real conversations</strong> with customers. 
You can send pictures, videos, and even voice messages, making interactions more <strong>engaging</strong> and less like a robot reading from a script. &#xA0;This helps build <strong>trust and loyalty</strong>, which translates to <strong>more sales and returning customers</strong> &#x2013; a win-win!</p><p><strong>So, How Do We Get This WhatsApp Magic Working for Us?</strong></p><p>Don&apos;t worry, there are tons of resources available to help your business set up a WhatsApp chatbot technology. &#xA0;It&apos;s actually way easier than you might think!</p><p><strong>Think about it:</strong> While your competitors are still stuck in the old-school marketing rut, you&apos;ll be using the power of WhatsApp and AI to <strong>launch your business growth into the stratosphere!</strong> &#xA0;</p><p>Don&apos;t think twice, start building Your Super-Smart WhatsApp AI Chatbot with <a href="https://sarufi.io/">Sarufi </a>&amp; Fuel Your Business Success!</p><p>Click the Link &#xA0;<a href="https://sarufi.io/">Sarufi</a> to explore the Magic of Artificial Intelligence Chatbots.</p>]]></content:encoded></item><item><title><![CDATA[A Huge Success: 7th GenAI Meetup + Hackathon]]></title><description><![CDATA[<p>Great news to share from last week&apos;s Generative AI MeetUp and Hackathon hosted by us <a href="www.linkedin.com/company/neurotech-hq/">Neurotech Africa</a> !</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg" class="kg-image" alt loading="lazy" width="2000" height="2000" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 1600w, https://blog.neurotech.africa/content/images/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 2000w" 
sizes="(min-width: 720px) 720px"></figure><p>The event, which focused on using Swahili AI Model built by <a href="https://www.linkedin.com/in/michael-s-mollel-phd-9b522634/">Dr. Michael S. Mollel, PHD </a> &#xA0;to tackle real-world problems, brought together a fantastic group of attendees - over 50!</p>]]></description><link>https://blog.neurotech.africa/a-huge-success-of-7th-genai-meetup/</link><guid isPermaLink="false">662f57f25a0e5405410daa34</guid><dc:creator><![CDATA[Omega Seyongwe]]></dc:creator><pubDate>Tue, 30 Apr 2024 10:13:15 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/04/IMG_8738-1.JPG" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8738-1.JPG" alt="A Huge Success: 7th GenAI Meetup + Hackathon"><p>Great news to share from last week&apos;s Generative AI MeetUp and Hackathon hosted by us <a href="www.linkedin.com/company/neurotech-hq/">Neurotech Africa</a> !</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="2000" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 1600w, https://blog.neurotech.africa/content/images/2024/04/7-TH-GENAI-HACKHATHON-WINNERS.jpg 2000w" sizes="(min-width: 720px) 720px"></figure><p>The event, which focused on using Swahili AI Model built by <a href="https://www.linkedin.com/in/michael-s-mollel-phd-9b522634/">Dr. Michael S. Mollel, PHD </a> &#xA0;to tackle real-world problems, brought together a fantastic group of attendees - over 50! 
&#x200D;Big thanks to all who participated!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8735.JPG" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="1333" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/IMG_8735.JPG 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/IMG_8735.JPG 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/IMG_8735.JPG 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/04/IMG_8735.JPG 2400w" sizes="(min-width: 720px) 720px"><figcaption><em>Participants of the 7th GenAI MeetUp</em></figcaption></figure><p>Here are some of the amazing things that came out of the hackathon:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8601-1.JPG" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="1333" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/IMG_8601-1.JPG 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/IMG_8601-1.JPG 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/IMG_8601-1.JPG 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/04/IMG_8601-1.JPG 2400w" sizes="(min-width: 720px) 720px"><figcaption><em>Participants of the 7th GenAI MeetUp in a session.</em></figcaption></figure><p>Team NURU under the leadership of <a href="https://www.linkedin.com/in/fredy-german-mgimba-174081146/">Fredy German Mgimba</a> took the top spot with their innovative LLM that generates Swahili programming code snippets for developers. 
This is a game-changer for Swahili-speaking programmers!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8749.JPG" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="1333" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/IMG_8749.JPG 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/IMG_8749.JPG 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/IMG_8749.JPG 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/04/IMG_8749.JPG 2400w" sizes="(min-width: 720px) 720px"><figcaption>Team NURU, 1st spot winners of the hackathon.</figcaption></figure><p>Team Maudhui AI under the leadership of <a href="https://www.linkedin.com/in/annagracem/">Annagrace Malamsha</a>, <a href="https://www.linkedin.com/in/edgargulay/">Edgar Gulay</a>, <a href="https://www.linkedin.com/in/shaaban-daudi-531a5b220/">Shaaban Daudi</a>, Godson Ntungi and <a href="https://www.linkedin.com/in/mgasa-lucas/">Mgasa Lucas</a> impressed everyone with their model that transforms Swahili text into visual content. 
Think &quot;show, don&apos;t tell&quot; on a whole new level!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8754.JPG" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="1333" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/IMG_8754.JPG 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/IMG_8754.JPG 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/IMG_8754.JPG 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/04/IMG_8754.JPG 2400w" sizes="(min-width: 720px) 720px"><figcaption>Team Maudhui, 2nd spot winners of the hackathon.</figcaption></figure><p>Team Sheria under the leadership of <a href="https://www.linkedin.com/in/gabriel-minzemalulu/">Gabriel Minzemalulu</a> and <a href="https://www.linkedin.com/in/adam-katani-b791b2259/">Adam Katani</a> developed a super helpful LLM chatbot that makes Swahili legal information more accessible. 
Huge win for the Swahili legal community!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8372.jpg" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="1333" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/IMG_8372.jpg 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/IMG_8372.jpg 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/IMG_8372.jpg 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/04/IMG_8372.jpg 2400w" sizes="(min-width: 720px) 720px"><figcaption><em>Participants of the 7th GenAI MeetUp</em></figcaption></figure><p>A big shoutout to the mentors, judges, winning teams, and everyone who participated!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8444.jpg" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="1333" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/IMG_8444.jpg 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/IMG_8444.jpg 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/IMG_8444.jpg 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/04/IMG_8444.jpg 2400w" sizes="(min-width: 720px) 720px"><figcaption><a href="https://www.linkedin.com/in/nsomazr">Zephania Reuben</a> mentoring the hackers.</figcaption></figure><p>This hackathon is a great example of how the AI community, academia, and developers can come together to build powerful AI solutions in Swahili.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/04/IMG_8485.jpg" class="kg-image" alt="A Huge Success: 7th GenAI Meetup + Hackathon" loading="lazy" width="2000" height="1333" 
srcset="https://blog.neurotech.africa/content/images/size/w600/2024/04/IMG_8485.jpg 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/04/IMG_8485.jpg 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/04/IMG_8485.jpg 1600w, https://blog.neurotech.africa/content/images/size/w2400/2024/04/IMG_8485.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><p><strong>#SwahiliAI</strong> <strong>#AIforGood</strong> <strong>#NeurotechAfrica</strong></p>]]></content:encoded></item><item><title><![CDATA[Approaches to Handle Uncertainty]]></title><description><![CDATA[<p>The world is inherently uncertain, characterized by imprecise measurements, ambiguous definitions, and incomplete knowledge. Uncertainty pervades various aspects of our lives, from everyday facts like temperature readings to complex decisions like evaluating a president&apos;s performance or identifying potential hazards. Despite this uncertainty, humans often make successful decisions, relying</p>]]></description><link>https://blog.neurotech.africa/approaches-to-handle-uncertainty/</link><guid isPermaLink="false">66073e115a0e5405410da9a5</guid><dc:creator><![CDATA[Edgar Gulay]]></dc:creator><pubDate>Fri, 29 Mar 2024 22:29:23 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/03/Default_technical_process_and_Approaches_to_Handle_Uncertainty_2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/03/Default_technical_process_and_Approaches_to_Handle_Uncertainty_2.jpg" alt="Approaches to Handle Uncertainty"><p>The world is inherently uncertain, characterized by imprecise measurements, ambiguous definitions, and incomplete knowledge. Uncertainty pervades various aspects of our lives, from everyday facts like temperature readings to complex decisions like evaluating a president&apos;s performance or identifying potential hazards. 
Despite this uncertainty, humans often make successful decisions, relying on heuristic reasoning and empirical observations.</p><h3 id="representation-of-uncertainty">Representation of Uncertainty</h3><p><br>To effectively address uncertainty, various models have been developed. Two prominent models are the non-deterministic model and the probabilistic model. The non-deterministic model represents uncertainty through a set of possible values or scenarios, while the probabilistic model assigns probabilities to different outcomes. Each model has its application domain and is suited to different types of uncertainty.</p><h3 id="sources-of-uncertainty"><br>Sources of Uncertainty</h3><p>Uncertainty arises from various sources, including uncertain data and uncertain knowledge. Uncertain data may be missing, unreliable, or ambiguous, while uncertain knowledge may stem from incomplete understanding or probabilistic effects. Representing and reasoning with uncertain information pose challenges, especially when the underlying system&apos;s complexity limits our ability to make accurate predictions or inferences.</p><h3 id="reasoning-under-uncertainty"><br>Reasoning Under Uncertainty</h3><p>Despite uncertainty, humans employ various strategies for reasoning and decision-making. These include heuristic approaches, empirical associations based on experience, and probabilistic reasoning using objective or subjective probabilities. Rational decision-making involves assessing the probabilities and utilities of different outcomes to select the action with the highest expected utility, following the principle of Maximum Expected Utility.</p><h3 id="some-relevant-factors"><br>Some Relevant Factors</h3><p>In addressing uncertainty, several factors must be considered, including the expressiveness of representation, comprehensibility, correctness, computational complexity, and reproducibility. 
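</p><p>As a brief aside, the principle of Maximum Expected Utility mentioned above is easy to make concrete. Below is a minimal Python sketch; the probabilities and utilities are invented purely for illustration:</p>

```python
# Maximum Expected Utility: choose the action whose
# probability-weighted utility is highest.
# All numbers below are made-up illustrative values.
actions = {
    # action: {outcome: (probability, utility)}
    "carry_umbrella": {"rain": (0.3, 60), "sun": (0.7, 80)},
    "no_umbrella":    {"rain": (0.3, 0),  "sun": (0.7, 100)},
}

def expected_utility(outcomes):
    """Sum of probability * utility over all outcomes of one action."""
    return sum(p * u for p, u in outcomes.values())

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
```

<p>Here <code>best_action</code> comes out as <code>carry_umbrella</code>, since an expected utility of 74 beats 70.</p><p>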
Representations must adequately capture human concepts and confidence levels, facilitate reasoning, and produce meaningful results efficiently and consistently.</p><h3 id="basics-of-probability-theory">Basics of Probability Theory</h3><p>Probability theory provides a mathematical framework for processing uncertain information. It involves defining a sample space of possible events and assigning probabilities to these events. Probabilities range from 0 to 1, with the total probability of the sample space being 1. Probabilistic reasoning allows for the calculation of compound probabilities, conditional probabilities, and joint probabilities, essential for making informed decisions under uncertainty.<br></p><h3 id="approaches-to-handle-uncertainty">Approaches to Handle Uncertainty</h3><p>Several approaches address uncertainty, including Bayesian approaches, Dempster-Shafer theory, hidden Markov models, certainty factors, and fuzzy logic. Bayesian methods derive probabilities based on observed evidence and prior beliefs, while Dempster-Shafer theory combines evidence using mass probability functions. Hidden Markov models deal with hidden states, while certainty factors express confidence in hypotheses. Fuzzy logic extends traditional binary logic to handle degrees of membership in sets, allowing for more flexible reasoning.<br>Let&apos;s delve into each approach in detail:</p><p><strong>1. 
Bayesian Approaches:</strong></p><p>Bayesian approaches utilize probabilities to represent uncertainty and make decisions based on observed evidence and prior beliefs.</p><p><strong>Process:</strong></p><ul><li>They derive probabilities of events or hypotheses given observed evidence using Bayes&apos; theorem or its variants.</li><li>These approaches involve updating prior beliefs with new evidence to obtain posterior probabilities.</li></ul><p><strong>Application:</strong></p><ul><li>Bayesian methods are widely used in various domains such as decision-making, pattern recognition, and machine learning.</li></ul><p><strong>Advantages:</strong></p><ul><li>They provide a sound theoretical foundation for reasoning under uncertainty.</li><li>Bayesian methods offer a well-defined semantics for decision-making.</li></ul><p><strong>Challenges:</strong></p><ul><li>They require substantial amounts of probability data, which may not always be available.</li><li>Subjective evidence might not always be reliable, leading to potential biases in decision-making.</li></ul><p><strong>2. 
Dempster-Shafer Theory:</strong></p><p>Dempster-Shafer theory is a mathematical framework for reasoning under uncertainty, focusing on combining evidence from different sources.</p><p><strong>Process:</strong></p><ul><li>It employs mass probability functions to represent belief or uncertainty associated with different propositions.</li><li>These mass probability functions assign values from 0 to 1 to elements in a frame of discernment, indicating the degree of belief.</li><li>Dempster&apos;s rule of combination allows for the combination of evidence from multiple sources.</li></ul><p><strong>Application:</strong></p><ul><li>Dempster-Shafer theory finds applications in fields such as decision support systems, fault diagnosis, and risk assessment.</li></ul><p><strong>Advantages:</strong></p><ul><li>It offers a clear and rigorous foundation for reasoning under uncertainty.</li><li>Dempster-Shafer theory enables the expression of confidence intervals, providing insights into the certainty about certainty.</li></ul><p><strong>Challenges:</strong></p><ul><li>Determining mass probability functions can be non-intuitive and computationally intensive.</li><li>Combining non-independent evidence may yield counterintuitive results due to normalization issues.</li></ul><p><strong>3. 
Hidden Markov Models (HMMs):</strong></p><p> Hidden Markov models are probabilistic models used to model sequences of observable events when the underlying states are not directly observable.</p><p><strong>Process:</strong></p><ul><li>HMMs consist of a set of hidden states, observable events, transition probabilities between states, and emission probabilities for each event.</li><li>They employ the Viterbi algorithm or the forward-backward algorithm for inference and learning.</li></ul><p><strong>Application:</strong></p><ul><li>HMMs are extensively used in speech recognition, natural language processing, bioinformatics, and financial modeling.</li></ul><p><strong>Advantages:</strong></p><ul><li>They can capture complex temporal dependencies and handle sequences of observations effectively.</li><li>HMMs allow for learning model parameters from data, enabling adaptation to different scenarios.</li></ul><p><strong>Challenges:</strong></p><ul><li>Determining the optimal number of states and model parameters can be challenging.</li><li>HMMs may suffer from the &quot;curse of dimensionality&quot; when dealing with large state spaces.</li></ul><p><strong>4. 
Certainty Factors:</strong></p><p>Certainty factors are used to express the degree of belief or confidence in a hypothesis given observed evidence.</p><p><strong>Process:</strong></p><ul><li>They denote the belief or disbelief in a hypothesis based on the presence or absence of evidence.</li><li>Certainty factors range between -1 (denial of the hypothesis) and 1 (confirmation of the hypothesis).</li></ul><p><strong>Application:</strong></p><ul><li>Certainty factors are commonly employed in expert systems, diagnostic systems, and decision support systems.</li></ul><p><strong>Advantages:</strong></p><ul><li>They offer a simple implementation and provide a way to model human experts&apos; beliefs effectively.</li><li>Certainty factors allow for the expression of both belief and disbelief in hypotheses.</li></ul><p><strong>Challenges:</strong></p><ul><li>Certainty factors may require adjustments or updates when new evidence becomes available.</li><li>They may not always align with probabilistic reasoning, leading to potential inconsistencies.</li></ul><p><strong>5. 
Fuzzy Logic:</strong></p><p>Fuzzy logic extends traditional binary logic to handle degrees of membership in sets, allowing for more flexible reasoning.</p><p><strong>Process:</strong></p><ul><li>It represents uncertainty by assigning degrees of membership to elements in sets using fuzzy membership functions.</li><li>Fuzzy logic employs fuzzy rules to infer conclusions from fuzzy inputs and outputs.</li></ul><p><strong>Application:</strong></p><ul><li>Fuzzy logic finds applications in control systems, decision support systems, and pattern recognition.</li></ul><p><strong>Advantages:</strong></p><ul><li>It provides a formal framework for representing and reasoning with uncertain or imprecise information.</li><li>Fuzzy logic allows for a more intuitive and human-like representation of uncertainty.</li></ul><p><strong>Challenges:</strong></p><ul><li>Defining appropriate membership functions and fuzzy rules can be subjective and require domain expertise.</li><li>Fuzzy logic systems may not always produce consistent or interpretable results, especially in complex scenarios.</li></ul><p>Credits : <a href="https://udsm-ai.gitbook.io/udsm-ai/">UDSM AI Gitbook</a></p>]]></content:encoded></item><item><title><![CDATA[Deploying Flask App in Digital Ocean Droplet]]></title><description><![CDATA[<p><br>Recently, I read a blog post about customizing APIs from Gemini Models, but when I tried following the instructions, everything got stuck at localhost:5000. Basically, my apps were only visible on my computer. 
So, I went on a quest to figure out how to make it accessible to everyone.</p>]]></description><link>https://blog.neurotech.africa/deploying-flask-app-in-digital-ocean-droplet/</link><guid isPermaLink="false">65d133475a0e5405410da867</guid><dc:creator><![CDATA[Edgar Gulay]]></dc:creator><pubDate>Sat, 17 Feb 2024 22:34:24 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/02/Default_Rocket_launching_from_midle_of_the_ocean_docker_with_h_0.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/02/Default_Rocket_launching_from_midle_of_the_ocean_docker_with_h_0.jpg" alt="Deploying Flask App in Digital Ocean Droplet"><p><br>Recently, I read a blog post about customizing APIs from Gemini Models, but when I tried following the instructions, everything got stuck at localhost:5000. Basically, my apps were only visible on my computer. So, I went on a quest to figure out how to make them accessible to everyone. That&apos;s when I learned about deploying Flask apps on a DigitalOcean droplet. This guide is like a map to help you do the same, so your projects don&apos;t get stuck at localhost:5000 either. Let&apos;s share your awesome creations with the world! &#x2728; It&apos;s time to unleash them onto the global stage by mastering the art of deploying Flask applications on a DigitalOcean droplet or any <strong>Ubuntu server</strong>.</p><hr><p>A step-by-step guide to deploying a Flask app on a DigitalOcean droplet using Gunicorn and Nginx.</p><p>Things we are going to cover:</p><ol><li>Creating a DigitalOcean droplet</li><li>Preparing the environment</li><li>The actual deployment guide</li></ol><hr><h2 id="creating-digital-ocean-droplet">Creating a DigitalOcean droplet</h2><p>Go to DigitalOcean and create an account if you don&apos;t have one. 
Here is the link:<br> &#x2728; <a href="https://m.do.co/c/a8690363c67d">DigitalOcean</a> &#x1F389; ~ This is a referral link; you will get <strong>$200</strong> in credit for <strong>60 days</strong>, so no more excuses as to why your projects are not deployed.</p><p>After creating an account, click on the Create button and select Droplets. You will be presented with a list of options. Select Ubuntu 20.04 LTS and choose the plan of your preference. You can choose any data center region you want; I usually choose the one closest to me.</p><p>Choose the authentication method. SSH keys are the best, but for this guide we are going to use a password. You can always add SSH keys later.</p><p>If you are a Windows user, get yourself VirtualBox and install Ubuntu, or use WSL &#x1F602;.</p><h2 id="preparing-environment">Preparing the environment</h2><p>Now that we have our droplet up and running, let&apos;s prepare our environment.<br>Connect to your droplet using SSH:</p><pre><code class="language-bash">ssh root@your_droplet_ip
</code></pre><p>Provide your password and you are in.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/02/image-1.png" class="kg-image" alt="Deploying Flask App in Digital Ocean Droplet" loading="lazy" width="1397" height="627" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/02/image-1.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/02/image-1.png 1000w, https://blog.neurotech.africa/content/images/2024/02/image-1.png 1397w" sizes="(min-width: 720px) 720px"></figure><h3 id="update-and-upgrade">Update and upgrade</h3><pre><code class="language-bash">sudo apt update
sudo apt install python3-pip python3-dev build-essential libssl-dev libffi-dev python3-setuptools
</code></pre><p>The command <code>sudo apt update</code> refreshes the list of available software packages on your system, ensuring that it has the latest information from the repositories. Following that, <code>sudo apt install python3-pip python3-dev build-essential libssl-dev libffi-dev python3-setuptools</code> installs essential tools and dependencies for Python development: the Python 3 package manager (<code>python3-pip</code>), development headers for Python (<code>python3-dev</code>), fundamental build tools (<code>build-essential</code>), libraries for SSL and FFI (<code>libssl-dev</code> and <code>libffi-dev</code>), and the Python package distribution tools (<code>python3-setuptools</code>).</p><h3 id="create-a-virtual-environment">Create a virtual environment</h3><pre><code class="language-bash">sudo apt install python3-venv
</code></pre><p>Create a directory for your project and navigate to it.</p><pre><code class="language-bash">mkdir apps
cd apps
</code></pre><p>Create a virtual environment</p><pre><code class="language-bash">python3 -m venv venv
</code></pre><p>Activate the virtual environment</p><pre><code class="language-bash">source venv/bin/activate
</code></pre><h3 id="clone-your-project">Clone your project</h3><p>let&apos;s call our project <code>flask_app</code></p><pre><code class="language-bash">git clone your_project_url.git
cd flask_app
</code></pre><h3 id="install-project-dependencies">Install project dependencies</h3><pre><code class="language-bash">pip install -r requirements.txt
</code></pre><p>Now that the project and its dependencies are in place, let&apos;s run it and see if everything is working as expected.<br>First, let&apos;s allow the port we are going to use:</p><pre><code class="language-bash">sudo ufw allow 5000

</code></pre><p>Run the app:</p><pre><code class="language-bash">python app.py
</code></pre><p>If everything is working as expected, you should be able to access your app using your droplet IP and port 5000, something like <code>http://your_droplet_ip:5000</code>.</p><h2 id="actual-deployment-guide">Actual deployment guide</h2><h3 id="create-wsgi-entry-point">Create WSGI entry point</h3><p>Create a file called <code>wsgi.py</code> in your project root directory, or just copy <code>app.py</code> to <code>wsgi.py</code>:</p><pre><code class="language-bash">cp app.py wsgi.py
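</code></pre><p>As an aside, it helps to see what Gunicorn actually expects from the <code>wsgi:app</code> target: a module exposing a WSGI callable. The sketch below is purely illustrative and is not part of the Flask project itself:</p>

```python
# wsgi_demo.py -- the bare interface Gunicorn speaks.
# "wsgi:app" simply means: in the module wsgi, find a callable named app.
def app(environ, start_response):
    # A WSGI app receives the request environ plus a start_response hook,
    # announces a status and headers, and returns an iterable of bytes.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from WSGI!"]

# Quick local check without Gunicorn (stdlib only):
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8000, app).serve_forever()
```

<p>Flask&apos;s application object implements exactly this interface, which is why pointing Gunicorn at <code>wsgi:app</code> works.</p><pre><code class="language-bash"># wsgi.py must expose a callable named app for gunicorn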
</code></pre><h3 id="configure-gunicorn">Configure Gunicorn</h3><p>Sanity check: confirm Gunicorn is installed.</p><pre><code class="language-bash">gunicorn --version
</code></pre><p>If it is not installed, install it using pip:</p><pre><code class="language-bash">pip install gunicorn
</code></pre><p>Try running your app using Gunicorn:</p><pre><code class="language-bash">gunicorn --bind 0.0.0.0:5000 wsgi:app
</code></pre><p>Check your app using your droplet IP and port 5000, something like <code>http://your_droplet_ip:5000</code>.</p><p>If everything is working as expected, let&apos;s move to the next step.</p><h3 id="create-a-systemd-service-file">Create a systemd service file</h3><p>Create a systemd service file for Gunicorn; this will allow Gunicorn to start automatically on boot.<br>First, deactivate your virtual environment:</p><pre><code class="language-bash">deactivate
</code></pre><p>Then create a file called <code>flask_app.service</code> in the <code>/etc/systemd/system/</code> directory.</p><pre><code class="language-bash">sudo nano /etc/systemd/system/flask_app.service
</code></pre><p>Add the following configuration to the file</p><pre><code class="language-bash">[Unit]
Description=Gunicorn instance to serve flask_app
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/root/apps/flask_app
Environment=&quot;PATH=/root/apps/venv/bin&quot;
ExecStart=/root/apps/venv/bin/gunicorn --workers 3 --bind 0.0.0.0:5000 -m 007 wsgi:app

[Install]
WantedBy=multi-user.target
</code></pre><p>After creating the file, start the gunicorn service and enable it to start on boot.</p><pre><code class="language-bash">sudo systemctl start flask_app
sudo systemctl enable flask_app
</code></pre><p>Check the status of the service to make sure it&apos;s running without any issues.</p><pre><code class="language-bash">sudo systemctl status flask_app
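</code></pre><p>From your own machine, you can also smoke-test the running service with a few lines of standard-library Python. This optional sketch reuses the guide&apos;s <code>your_droplet_ip</code> placeholder:</p>

```python
# smoke_test.py -- check the deployed service from your own machine.
# DROPLET_IP is this guide's placeholder; substitute your droplet's address.
import urllib.request

DROPLET_IP = "your_droplet_ip"  # placeholder, not a resolvable host

def http_status(url, timeout=10):
    """Return the HTTP status code that `url` responds with."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

# Example, once DROPLET_IP is real:
#   http_status(f"http://{DROPLET_IP}:5000/")
```

<p>You can also watch the service logs while you test:</p><pre><code class="language-bash"># follow the service logs in real time
journalctl -u flask_app -f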
</code></pre><p>Example output:</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/02/image-2.png" class="kg-image" alt="Deploying Flask App in Digital Ocean Droplet" loading="lazy" width="1226" height="261" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/02/image-2.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/02/image-2.png 1000w, https://blog.neurotech.africa/content/images/2024/02/image-2.png 1226w" sizes="(min-width: 720px) 720px"></figure><h3 id="configure-nginx-to-proxy-requests">Configure Nginx to Proxy Requests</h3><p>Install Nginx:</p><pre><code class="language-bash">sudo apt install nginx
</code></pre><p>Create a new server block configuration file in Nginx&apos;s <code>sites-available</code> directory.</p><pre><code class="language-bash">sudo nano /etc/nginx/sites-available/flask_app
</code></pre><p>Add the following configuration to the file. Replace <code>your_domain_or_ip</code> with your actual domain name or IP address.</p><pre><code class="language-bash">server {
    listen 80;
    server_name your_domain_or_ip;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
</code></pre><p>Create a symbolic link to the file in the <code>sites-enabled</code> directory.</p><pre><code class="language-bash">sudo ln -s /etc/nginx/sites-available/flask_app /etc/nginx/sites-enabled
</code></pre><p>Test your Nginx configuration for syntax errors.</p><pre><code class="language-bash">sudo nginx -t
</code></pre><p>You should see something like this:</p><pre><code class="language-bash">nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
</code></pre><p>If the test is successful, restart Nginx.</p><pre><code class="language-bash">sudo systemctl restart nginx
</code></pre><p>Remember we allowed port 5000 earlier; now we can remove it and open the standard web ports instead (otherwise, with ufw active, Nginx would be unreachable).</p><pre><code class="language-bash">sudo ufw delete allow 5000
sudo ufw allow &apos;Nginx Full&apos;
</code></pre><p>Now you should be able to access your app using your domain name or IP address without the port number, something like <code>http://your_droplet_ip</code>.</p><h3 id="secure-your-app-with-ssl">Secure your app with SSL</h3><p>Install Certbot:</p><pre><code class="language-bash">sudo apt install certbot python3-certbot-nginx
</code></pre><p>Obtain a free SSL certificate for your domain using certbot if you have a domain.</p><pre><code class="language-bash">sudo certbot --nginx -d your_domain_or_ip
</code></pre><p>Certbot will ask you to provide an email address for lost key recovery and notices, and to agree to the terms of service. After doing so, certbot will communicate with the Let&apos;s Encrypt server, then run a challenge to verify that you control the domain you&apos;re requesting a certificate for.</p><p>When that&apos;s finished, certbot will ask how you&apos;d like to configure your HTTPS settings.</p><p>or you can use openssl to generate a self-signed certificate.</p><pre><code class="language-bash">sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/flask_app.key -out /etc/ssl/certs/flask_app.crt
</code></pre><p>Create a new server block configuration file in Nginx&apos;s <code>sites-available</code> directory.</p><pre><code class="language-bash">sudo nano /etc/nginx/sites-available/flask_app
</code></pre><p>Add the following configuration to the file. Replace <code>your_domain_or_ip</code> with your actual domain name or IP address.</p><pre><code class="language-bash">
server {
    listen 80;
    server_name your_domain_or_ip;

    # send all plain-HTTP traffic to the HTTPS server block below
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name your_domain_or_ip;

    ssl_certificate /etc/ssl/certs/flask_app.crt;
    ssl_certificate_key /etc/ssl/private/flask_app.key;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
</code></pre><p>This configuration tells Nginx to listen on both port 80 (HTTP) and port 443 (HTTPS). It uses the self-signed certificate and private key that you created.</p><p>After updating the Nginx configuration, remember to test the configuration and reload or restart Nginx:</p><pre><code class="language-bash">sudo nginx -t
sudo systemctl reload nginx
</code></pre><p>The <code>nginx -t</code> command checks the configuration for syntax errors. The <code>systemctl reload nginx</code> command reloads the Nginx configuration without interrupting currently connected clients.</p><p><em>Please note that because the certificate is self-signed, browsers will show a warning to users that the site is not secure. Users will need to manually accept the risk and proceed to the site.</em></p><h3 id="conclusion">Conclusion</h3><p>Hopefully, with this template guide, you were able to deploy your Flask app on a DigitalOcean droplet. Your projects are way too precious to be running on localhost. Deploy them and share them with the world. If you have any questions or suggestions, feel free to reach out; help is everywhere. &#x2728;</p>]]></content:encoded></item><item><title><![CDATA[Organizing Your Flask Project with Multiple Apps]]></title><description><![CDATA[<p>Flask, a lightweight and flexible web framework for Python, provides a simple and extensible structure for building web applications. As your project grows, organizing your code becomes crucial for maintainability. One effective way to structure your Flask project is by using multiple apps. In this blog post, we&apos;ll</p>]]></description><link>https://blog.neurotech.africa/organizing-your-flask-project-with-multiple-apps/</link><guid isPermaLink="false">65cb4c215a0e5405410da840</guid><dc:creator><![CDATA[Edgar Gulay]]></dc:creator><pubDate>Tue, 13 Feb 2024 21:00:00 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/02/1_0G5zu7CnXdMT9pGbYUTQLQ.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/02/1_0G5zu7CnXdMT9pGbYUTQLQ.png" alt="Organizing Your Flask Project with Multiple Apps"><p>Flask, a lightweight and flexible web framework for Python, provides a simple and extensible structure for building web applications. 
As your project grows, organizing your code becomes crucial for maintainability. One effective way to structure your Flask project is by using multiple apps. In this blog post, we&apos;ll explore how to register and organize multiple apps within a Flask project.</p><h2 id="why-multiple-apps">Why Multiple Apps?</h2><p>Breaking down a monolithic Flask application into smaller, modular apps offers several advantages:</p><p><strong>Modularity:</strong> Each app can focus on specific functionality, making it easier to maintain and understand.</p><p><strong>Reusability:</strong> Apps can be reused across projects, promoting code consistency.</p><p><strong>Scalability:</strong> Separating concerns allows for easier scaling as your project expands.</p><h2 id="getting-started">Getting Started</h2><p>You can get a template Flask API app from the Aida (Flask REST API) template:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/Aida-LLC/Flask-Rest.git"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - Aida-LLC/Flask-Rest: Flask-Rest is a lightweight and flexible REST API project built using the Flask web framework. It provides a solid foundation for developing RESTful web services, making it easy to create, update, and retrieve data through HTTP requests.</div><div class="kg-bookmark-description">Flask-Rest is a lightweight and flexible REST API project built using the Flask web framework. 
It provides a solid foundation for developing RESTful web services, making it easy to create, update, ...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Organizing Your Flask Project with Multiple Apps"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">Aida-LLC</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://repository-images.githubusercontent.com/755284528/c53ac1ce-5c08-4394-ac74-ba56eb39eea7" alt="Organizing Your Flask Project with Multiple Apps"></div></a></figure><p>Let&apos;s assume you have a basic Flask project with the following structure:</p><pre><code>/my_project
    /app
        /static
        /templates
        __init__.py
        routes.py
    config.py
    run.py
</code></pre><p>Now, let&apos;s create a new app called <code>admin</code> and register it within the project.</p><h2 id="step-1-create-the-new-app">Step 1: Create the New App</h2><p>Create a folder for the new app within the project:</p><pre><code>/my_project
    /app
        /static
        /templates
        __init__.py
        routes.py
    /admin
        __init__.py
        routes.py
    config.py
    run.py
</code></pre><h2 id="step-2-define-routes-for-the-new-app">Step 2: Define Routes for the New App</h2><p>Define the routes for the new app in <code>admin/routes.py</code>:</p><pre><code class="language-python"># admin/routes.py

# Reuse the blueprint instance created in admin/__init__.py instead of
# creating a second, unregistered one here
from . import admin_bp

@admin_bp.route(&apos;/dashboard&apos;)
def dashboard():
    return &apos;Admin Dashboard&apos;
</code></pre><h2 id="step-3-register-the-new-app">Step 3: Register the New App</h2><p>In <code>admin/__init__.py</code>, create an instance of <code>Blueprint</code> and register it with the main Flask app:</p><pre><code class="language-python"># admin/__init__.py

from flask import Blueprint

admin_bp = Blueprint(&apos;admin&apos;, __name__)

# Import routes to register them
from . import routes

def init_app(app):
    app.register_blueprint(admin_bp, url_prefix=&apos;/admin&apos;)
</code></pre><h2 id="step-4-update-the-main-app">Step 4: Update the Main App</h2><p>In the main <code>__init__.py</code> file, update it to initialize the new app:</p><pre><code class="language-python"># app/__init__.py

from flask import Flask
from admin import init_app as init_admin_app  # admin/ is a sibling of app/, imported from the project root

app = Flask(__name__)


# Initialize the admin app
init_admin_app(app)
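
# With the blueprint registered, a minimal run.py at the project root
# (a hypothetical sketch; adapt to your setup) just imports the app and
# starts the development server:
#
#     from app import app
#
#     if __name__ == &apos;__main__&apos;:
#         app.run(debug=True)
#
# The admin routes are then served under /admin, e.g. /admin/dashboard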

</code></pre><p>Now, you have successfully registered a new app within your Flask project.</p><h2 id="final-thoughts">final thoughts ...</h2><p>Organizing your Flask project with multiple apps allows for a cleaner and more modular codebase. Each app can encapsulate specific functionality, making the entire project more maintainable and scalable. Following these steps, you can easily extend your project with new apps, ensuring a structured and organized codebase.</p><p>Happy coding!</p><p></p>]]></content:encoded></item><item><title><![CDATA[Large Language Models (The use-case for RAG in Tanzania’s education system)]]></title><description><![CDATA[<p><strong>Can we spread the knowledge and enthusiasm for LLMs far and wide!</strong></p><p>The goal here is to put knowledge into practice.</p><p>On the 8th of February, <a href="https://ailab.co.tz/">Tanzania AI Community</a> hosted an extraordinary event centered on LLM (Large Language Models). If you missed out we&apos;ve got you covered. Here&</p>]]></description><link>https://blog.neurotech.africa/large-language-models/</link><guid isPermaLink="false">65c5a0505a0e5405410da679</guid><dc:creator><![CDATA[MGASA LUCAS]]></dc:creator><pubDate>Sat, 10 Feb 2024 09:58:51 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/02/Screenshot-2024-02-09-065032.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.neurotech.africa/content/images/2024/02/Screenshot-2024-02-09-065032.png" alt="Large Language Models (The use-case for RAG in Tanzania&#x2019;s education system)"><p><strong>Can we spread the knowledge and enthusiasm for LLMs far and wide!</strong></p><p>The goal here is to put knowledge into practice.</p><p>On the 8th of February, <a href="https://ailab.co.tz/">Tanzania AI Community</a> hosted an extraordinary event centered on LLM (Large Language Models). If you missed out we&apos;ve got you covered. 
Here&apos;s a rundown of the discussions we had.</p><p>First off, let&apos;s talk about the heart of the matter. What were the main topics on the table? </p><p>The event was expertly hosted by <a href="https://www.linkedin.com/in/victor-oldensand">Victor Oldensand</a>, who shed light on the application of retrieval-augmented generation to enhance the competence of generative AI in supporting Tanzanian educators. I want to clarify that I don&apos;t claim ownership of the valuable insights he provided, and any mistakes in this article are solely mine, not his.</p><p>Let me share what I grasped from the presenter and how I attempted to translate that understanding into action through coding challenges and implementations. A heartfelt thank you to Victor for the wealth of experience and knowledge he shared with us. It was truly invaluable.</p><p>Let&apos;s kickstart our summary by focusing on these key areas, just as our discussions unfolded.</p><h3 id="large-language-modelsllmsretrieval-augmented-generation-ragevaluating-rag-systems"><strong><em>Large Language Models (LLMs)<br>Retrieval Augmented Generation (RAG)</em></strong><em><br>Evaluating RAG systems</em></h3><p></p><p>The topics above are what was discussed, but here we&apos;ll skip a few </p><h3 id="ai-in-educationchallenges-and-ethics"><strong><em>AI in Education<br>Challenges and Ethics</em></strong></h3><p></p><p>and dive straight into the coding implementation of what was discussed. Let&apos;s roll up our sleeves and get coding!</p><p><strong><em>What are Large Language Models?</em></strong></p><p>Large language models (LLMs) are deep learning algorithms that can recognize, summarize, translate, predict, and generate content using very large datasets. These models are based on transformer architecture and are trained using massive textual datasets, enabling them to understand and generate human language. 
Some examples of popular large language models include GPT-3 and GPT-4 from OpenAI, LLaMA from Meta, and PaLM2 from Google. These models have shown the potential to disrupt various industries and have been adopted for a wide range of applications.</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/02/Screenshot-2024-02-09-212555.png" class="kg-image" alt="Large Language Models (The use-case for RAG in Tanzania&#x2019;s education system)" loading="lazy" width="932" height="547" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/02/Screenshot-2024-02-09-212555.png 600w, https://blog.neurotech.africa/content/images/2024/02/Screenshot-2024-02-09-212555.png 932w" sizes="(min-width: 720px) 720px"></figure><p>What is <strong><em>Retrieval Augmented Generation (RAG)?</em></strong></p><p>Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models by retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information. It allows LLMs to build on a specialized body of knowledge to answer questions more accurately and provides users with insight into LLMs&apos; generative process. RAG involves two phases: retrieval and content generation, and it extends the capabilities of LLMs to specific domains or an organization&apos;s specialized knowledge. The technique optimizes the output of LLMs so that they reference a knowledge base outside of their training data sources, ensuring the model has access to the most current, reliable facts. It also lets models cite their sources, like footnotes in a research paper, so users can check any claims, which builds trust. 
The technique can be used by nearly any LLM to improve the quality of LLM-generated content. </p><p></p><h3 id="so-how-to-build-an-llm-application-with-rag">So how do we build an LLM application with RAG?</h3><p></p><p>We will build a simple LLM application in Python using the LangChain library. Our RAG application will expand an LLM&apos;s knowledge using private data. In this case, it will be a PDF file containing some text: a document on ultrasonics in physics.</p><h3 id="1-prerequisites">1. Prerequisites</h3><p>At the very beginning, we must install all the required modules that our application will use. Let&#x2019;s run this command in the terminal in the project directory.</p><pre><code class="language-shell">pip install langchain-community==0.0.11 pypdf==3.17.4 langchain==0.1.0 python-dotenv==1.0.0 langchain-openai==0.0.2.post1 faiss-cpu==1.7.4 tiktoken==0.5.2 langchainhub==0.1.14
</code></pre><p><br>Then create a &#x2018;data&#x2019; directory and place the PDF file in it. &#xA0;We must also create a main.py file in the project directory, where we will store the whole code of our application.</p><p>The main file will look like this.</p><pre><code class="language-python">def main():
  print(&quot;Everything will be written over here!&quot;)

if __name__ == &quot;__main__&quot;: 
  main()</code></pre><p></p><h3 id="2-load-the-pdf-file-into-the-application">2. Load the PDF file into the application.</h3><p>We will use a document loader provided by LangChain called PyPDFLoader.</p><pre><code class="language-python">from langchain_community.document_loaders import PyPDFLoader

pdf_path = &quot;./data/Ultrasonics.pdf&quot;

def main():
  loader = PyPDFLoader(file_path=pdf_path)
  documents = loader.load()
  print(documents) 

if __name__ == &quot;__main__&quot;: 
  main()</code></pre><p></p><p>First, we should <strong><strong>create an instance of the PyPDFLoader object</strong></strong> where we pass the path to our file. The next step is to simply <strong><strong>call the load function on this object</strong></strong> and save the loaded file in the documents variable. It will be an array consisting of Document objects, where each of these objects is a representation of one page of our file.</p><p>The print() function should output an array similar to this:</p><pre><code>[Document(page_content=&apos;[...]&apos;, metadata={&apos;source&apos;: pdf_path, page: 1}), Document(page_content=&apos;[...]&apos;, metadata={&apos;source&apos;: pdf_path, page: 2}), ...]
</code></pre><h3 id="3-splitting-document-into-smaller-chunks">3. Splitting document into smaller chunks</h3><p>We don&#x2019;t want to send a whole document as context with our query to the LLM. To split the document, we will use a class provided by LangChain called CharacterTextSplitter, which we can import from the LangChain library:</p><pre><code class="language-python">from langchain.text_splitter import CharacterTextSplitter
</code></pre><p>Then we can create an instance of it and <strong><strong>call the split_documents() function</strong></strong>, passing our loaded documents as a parameter.</p><pre><code class="language-python">def main():
  loader = PyPDFLoader(file_path=pdf_path) 
  documents = loader.load() 
  text_splitter = CharacterTextSplitter( chunk_size=1000, chunk_overlap=50, separator=&quot;\n&quot; ) 
  docs = text_splitter.split_documents(documents)
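
  # Optional sanity check (illustrative; the exact numbers depend on your PDF):
  # compare the number of loaded pages with the number of chunks produced
  print(f&quot;Loaded {len(documents)} pages, split into {len(docs)} chunks&quot;)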
</code></pre><p>Let&apos;s briefly describe what&apos;s going on here.</p><p>First, we are creating a CharacterTextSplitter object, which takes several parameters:</p><ul><li><strong><strong>chunk_size</strong></strong> - defines the maximum size of a single chunk measured in tokens.</li><li><strong><strong>chunk_overlap</strong></strong> - defines the size of overlap between chunks. This helps to preserve the meaning of the split text by ensuring that chunks are not split in a way that would distort their meaning.</li><li><strong><strong>separator</strong></strong> - defines the separator that will be used to delineate our chunks.</li></ul><p>In the docs variable, we will get an array of Document objects - the same as from the load() function of the PyPDFLoader class. But this time, this array will contain more elements because we have split them.</p><h3 id="4-prepare-environment-variables-and-api-key-to-store-it-there">4. Prepare environment variables and API Key to store it there</h3><p>&#x200B;&#x200B;The next step will be <strong><strong>converting these chunks into numeric vectors and storing them in a vector database.</strong></strong> This process is called embedding, we will talk about embedding next time, so we won&apos;t go into detail about it now.</p><p>For the embedding process, we need an external embedding model. We will use OpenAI embeddings for this purpose. To do that, we have to generate an OpenAI API key. <br>But before that, we have to create a .env file where we will store this key.</p><p>Now, we need to create an account on the <a href="https://platform.openai.com/docs/overview" rel="nofollow noopener noreferrer">platform.openai.com/docs/overview</a> page. 
&#xA0;Afterward, we should generate an API key on the <a href="https://platform.openai.com/api-keys" rel="nofollow noopener noreferrer">platform.openai.com/api-keys</a> page by creating a new secret key.</p><p>Copy the secret key and paste it into the .env file like this:</p><pre><code>OPENAI_API_KEY=your_key
</code></pre><p>Replace &quot;your_key&quot; with your actual OpenAI API key.</p><p>Okay, let&#x2019;s load environment variables into our project by importing the load_dotenv function:</p><pre><code class="language-python">from dotenv import load_dotenv
</code></pre><p>And call it at the very beginning of the main function:</p><pre><code class="language-python">def main(): 
	load_dotenv()
	loader = PyPDFLoader(file_path=pdf_path) 
	documents = loader.load() 
	text_splitter = CharacterTextSplitter( chunk_size=1000, chunk_overlap=50, separator=&quot;\n&quot; ) 
	docs = text_splitter.split_documents(documents)
</code></pre><h3 id="5-implementing-the-embedding-process">5. Implementing the embedding process</h3><p>First, we have to import the OpenAIEmbeddings class:</p><pre><code class="language-python">from langchain_openai import OpenAIEmbeddings
</code></pre><p>Then we should create an instance of this class. Let&#x2019;s assign it to the &apos;embeddings&apos; variable like this:</p><pre><code class="language-python">embeddings = OpenAIEmbeddings()
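
# Optional quick check (note: embed_query makes a billable API call);
# the returned vector is a plain list of floats:
# vec = embeddings.embed_query(&quot;habari&quot;)
# print(len(vec))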
</code></pre><h3 id="6-setting-up-local-vector-databasefaiss">6. Setting up local vector database - FAISS</h3><p>We have loaded and prepared our file, and we have also created an object instance for the embedding model. <strong><strong>We are now ready to transform our chunks into numeric vectors and save them in a vector database.</strong></strong> We will keep all our data locally using the FAISS vector database. Facebook AI Similarity Search (Faiss) is a tool designed by Facebook AI for effective similarity search and clustering of dense vectors.</p><p>First, we need to import the FAISS instance:</p><pre><code class="language-python">from langchain_community.vectorstores.faiss import FAISS
</code></pre><p>And implement the process of converting and saving embeddings:</p><pre><code class="language-python">def main(): 
	load_dotenv() 
	loader = PyPDFLoader(file_path=pdf_path) 
	documents = loader.load() 
	text_splitter = CharacterTextSplitter( chunk_size=1000, chunk_overlap=50, separator=&quot;\n&quot; ) 
	docs = text_splitter.split_documents(documents) 
	embeddings = OpenAIEmbeddings() 
	vectorstore = FAISS.from_documents(docs, embeddings)    
	vectorstore.save_local(&quot;vector_db&quot;)
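	# The index is now persisted in ./vector_db, so later runs can reload it
	# without re-embedding the document (we do exactly that further below)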
</code></pre><p>We have added two lines to our code. The first line takes our split chunks (docs) and the embeddings model to convert the chunks from text to numeric vectors. After that, we are saving the converted data locally in the &apos;vector_db&apos; directory.</p><h3 id="7-creating-a-prompt">7. Creating a prompt</h3><p>For preparing a prompt we will use a &apos;langchain&apos; hub. We will pull a prompt called &apos;langchain-ai/retrieval-qa-chat&apos; from there. This prompt is specially designed for our case, allowing us to ask the model about things from the provided context. Under the hood, the prompt looks like this:</p><pre><code>Answer any use questions based solely on the context below:
&lt;context&gt; 
{context}
&lt;/context&gt;
</code></pre><p>Let&#x2019;s import a hub from the &apos;langchain&apos; library:</p><pre><code class="language-python">from langchain import hub
</code></pre><p>Then, simply use the &apos;pull()&apos; function to retrieve this prompt from the hub and store it in a variable:</p><pre><code class="language-python">retrieval_qa_chat_prompt = hub.pull(&quot;langchain-ai/retrieval-qa-chat&quot;)
</code></pre><h3 id="8-setting-up-a-large-language-model">8. Setting up a large language model</h3><p>Great. The next thing <strong><strong>we&apos;ll need is a large language model</strong></strong> - in our case, it will be one of the OpenAI models. Again, we need an OpenAI key but we have already set up it along with the embeddings, so we don&apos;t need to do it again.</p><p>Let&apos;s go ahead and import the model:</p><pre><code class="language-python">from langchain_openai import ChatOpenAI, OpenAIEmbeddings
</code></pre><p>And assign it to a variable in our main function:</p><pre><code class="language-python">llm = ChatOpenAI()
</code></pre><h3 id="9-retrieve-context-data-from-the-database">9. Retrieve context data from the database</h3><p>Okay, we have finished preparing the vector database, embeddings, and LLM (large language model). Now, <strong><strong>we need to connect everything using chains.</strong></strong> We will need two types of chains provided by &apos;langchain&apos; for that.</p><p>The first one is the &apos;create_stuff_documents_chain,&apos; which we need to import from the &apos;langchain&apos; library:</p><pre><code class="language-python">from langchain.chains.combine_documents import create_stuff_documents_chain
</code></pre><p>Next, pass our large language model (LLM) and prompt to it.</p><pre><code class="language-python">combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)
</code></pre><p>This function returns a Runnable object, which requires a context parameter. Running it will look like this:</p><pre><code class="language-python">combine_docs_chain.invoke({&quot;context&quot;: docs, &quot;input&quot;: &quot;What is piezo effect?&quot;})
</code></pre><h3 id="10-retrieve-only-the-relevant-data-as-a-context">10. Retrieve only the relevant data as a context</h3><p>Generally, this will work, but we should pass only the information related to our query as the context. We will achieve this by combining this chain with another one, which will retrieve only the chunks important to us from the database and automatically add them as context to the prompt.</p><p>Let&apos;s import that chain from the &apos;langchain&apos; library:</p><pre><code class="language-python">from langchain.chains import create_retrieval_chain 
</code></pre><p>First, we need to prepare our database as a retriever, which will enable semantic search for the chunks that are relevant to our query.</p><pre><code class="language-python">retriever = FAISS.load_local(&quot;vector_db&quot;, embeddings).as_retriever()
</code></pre><p>So, we load our directory where we store the chunks converted to vectors and pass it to an embeddings function. In the end, we return it as a retriever.</p><p>Now, we can combine our chains:</p><pre><code class="language-python">retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)
</code></pre><p>Under the hood, it will retrieve relevant chunks from the database and add them to our prompt as context. All we have to do now is <strong><strong>invoke this chain with our query as an input parameter:</strong></strong></p><pre><code class="language-python">response = retrieval_chain.invoke({&quot;input&quot;: &quot;What is piezo effect?&quot;})
</code></pre><p>As a response, we will receive an object with three variables:</p><ul><li><strong><strong>input</strong></strong> - our query;</li><li><strong><strong>context</strong></strong> - an array of documents (chunks) that we have passed as context to the prompt;</li><li><strong><strong>answer</strong></strong> - the answer to our query generated by the large language model (LLM).</li></ul><p>Let&#x2019;s print out the &quot;answer&quot; property:</p><pre><code class="language-python">print(response[&quot;answer&quot;])
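
# The retrieved chunks are available as well; their metadata shows which
# pages of the PDF grounded the answer
for doc in response[&quot;context&quot;]:
    print(doc.metadata)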
</code></pre><p>Our printed answer looks as follows:</p><blockquote>The piezo effect is a phenomenon in which certain materials, such as quartz, tourmaline, and Rochelle salt, generate an electric charge when subjected to pressure or mechanical stress. This means that if pressure is applied to one pair of opposite faces of a crystal, electric charge develops on the other pair of opposite faces. The piezo effect is used in piezo-electric generators or oscillators to produce ultrasonic waves.</blockquote><p>Looks pretty nice :)</p><h3 id="10-you%E2%80%99ve-made-it-our-llm-app-is-ready"><em>10. You&#x2019;ve made it! Our LLM app is ready</em></h3><p>We have extended the knowledge base of the LLM model with data from an Ultrasonic.pdf file. The model is now able to answer our questions based on the context that we have provided in the prompt.</p><p>Below is the whole piece of code.</p><pre><code class="language-python">from dotenv import load_dotenv
from langchain import hub
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores.faiss import FAISS

pdf_path = &quot;./data/Ultrasonics.pdf&quot;


def main():
    load_dotenv()

    loader = PyPDFLoader(file_path=pdf_path)
    documents = loader.load()

    text_splitter = CharacterTextSplitter(
        chunk_size=1000, chunk_overlap=50, separator=&quot;\n&quot;
    )
    docs = text_splitter.split_documents(documents)

    embeddings = OpenAIEmbeddings()

    vectorstore = FAISS.from_documents(docs, embeddings)
    vectorstore.save_local(&quot;vector_db&quot;)

    retrieval_qa_chat_prompt = hub.pull(&quot;langchain-ai/retrieval-qa-chat&quot;)

    llm = ChatOpenAI()

    combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)

    retriever = FAISS.load_local(&quot;vector_db&quot;, embeddings).as_retriever()
    retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)

    response = retrieval_chain.invoke(
        {&quot;input&quot;: &quot;What is piezo effect&quot;}
    )

    print(response[&quot;answer&quot;])


if __name__ == &quot;__main__&quot;:
    main()</code></pre><p></p><h3 id="11-more-tips"><em>11: More Tips.</em></h3><p>&#x25CF; Data pipeline for RAG try (<a href="https://www.llamaindex.ai/">LlammaIndex</a>)</p><p>&#x25CF; Vector database for rapid prototyping try (<a href="https://www.trychroma.com/">ChromaDB</a>) </p><p> &#x25CF; And there are many open source models found on Hugging Face </p><p>&#x25CF; <a href="https://docs.pinecone.io/docs/trulens">TruLens</a> for RAG evaluation</p><p></p><p>Well, let me tell you, &#xA0;We delved into various aspects of LLMs and the RAG concept, exploring its potential. Even if you couldn&apos;t make it to the event, don&apos;t worry &#x2013; we&apos;ve got some insightful nuggets to share.</p><p>Stay tuned as we unpack the essence of our discussions, providing valuable insights for AI enthusiasts, whatever place you will be, or anywhere else in Tanzania just build and that&apos;s what I was inspired to share.</p><p>I can&apos;t wait to see what you are building next.</p><p>For any feedback and chat, let&apos;s connect: <a href="https://twitter.com/MgasaLucas">Mgasa Lucas</a></p><p>Until next time, stay safe.</p>]]></content:encoded></item><item><title><![CDATA[Mastering Swahili News Classification with LSTM: A Step-by-Step Guide]]></title><description><![CDATA[<h2 id="introduction">Introduction</h2><p>In the dynamic and broad field of artificial intelligence, the <strong><a href="https://github.com/UDSM-AI">UDSM AI Community</a></strong> continues to be a fostering place of innovation, exploration, and collaborative learning. 
In our latest session, we embarked on an exciting journey into the realm of natural language processing, specifically focusing on Swahili news classification using</p>]]></description><link>https://blog.neurotech.africa/mastering-swahili-news-classification-with-lstm-a-step-by-step-guide/</link><guid isPermaLink="false">65a6bef25a0e5405410da4d1</guid><category><![CDATA[AI]]></category><category><![CDATA[nlp]]></category><category><![CDATA[classification]]></category><category><![CDATA[models]]></category><category><![CDATA[swahili]]></category><category><![CDATA[swahili dataset]]></category><dc:creator><![CDATA[Gabriel D Minzemalulu]]></dc:creator><pubDate>Thu, 18 Jan 2024 05:35:56 GMT</pubDate><media:content url="https://blog.neurotech.africa/content/images/2024/01/lstm-5--1-.jpeg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://blog.neurotech.africa/content/images/2024/01/lstm-5--1-.jpeg" alt="Mastering Swahili News Classification with LSTM: A Step-by-Step Guide"><p>In the dynamic and broad field of artificial intelligence, the <strong><a href="https://github.com/UDSM-AI">UDSM AI Community</a></strong> continues to be a fostering place of innovation, exploration, and collaborative learning. In our latest session, we embarked on an exciting journey into the realm of natural language processing, specifically focusing on Swahili news classification using Long Short-Term Memory (LSTM) networks.</p><p>Our collective attempt is not merely a technical quest but proof of our commitment to pushing the boundaries of AI understanding. And we chose Swahili, a language rich in history and culture, to serve as our medium for exploring text (news) classification, demonstrating how advanced techniques can unravel meaningful insights from diverse sources.</p><h2 id="session-objective">Session Objective</h2><p>Our main goal in this session was twofold. 
First, we wanted to walk you through a real hands-on activity, neatly packed into a Jupyter notebook. This activity is all about using LSTMs to explore how we can use different tokenizers with Swahili text. </p><p>Second, we want this to be a space where everyone can join in. We&apos;re all about teamwork here. So, we&apos;re encouraging you to jump into the code, ask questions, and share your thoughts. Together, we&apos;re going to get better at understanding how to play with tokenizers and make sense of the text using our NLP skills.</p><p>As we dive into the technical bits, remember, it&apos;s not just about the code; it&apos;s about growing together, helping each other, and making our AI community even more awesome.</p><h2 id="technical-implementation">Technical Implementation</h2><p>In this technical implementation, we explore the process of Swahili news classification using Long Short-Term Memory (LSTM) networks. The project is a collaborative effort within the UDSM AI Community, aiming to showcase effective Swahili news classification techniques to the community members. The focal point of the session was a Jupyter notebook that encapsulated a hands-on technical activity, specifically implementing a Swahili news classification model. This showcased cutting-edge technologies and methodologies in natural language processing. The code was carefully made to correctly classify Swahili news articles, demonstrating best practices in text classification within the field of machine learning.</p><p>The session encouraged everyone to check out the Jupyter notebook and understand the code. We&apos;ll talk about specific parts of the code here, explaining important methods and steps. Here are the things we did: -</p><h3 id="1-importing-libraries">1. Importing Libraries</h3><p>The journey into Swahili news classification began with the importation of essential libraries. 
This ensures that the tools needed for data manipulation and model construction are at our fingertips.</p><!--kg-card-begin: markdown--><pre><code class="language-python"># Import necessary libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.preprocessing import OneHotEncoder

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

</code></pre>
<!--kg-card-end: markdown--><h3 id="2-loading-datasets">2. Loading Datasets</h3><p>To lay the groundwork for our Swahili news classification project, we load the dataset from a CSV file. The dataset, sourced from various Swahili news websites, will be our playground for training and testing the LSTM model</p><!--kg-card-begin: markdown--><pre><code class="language-python"># Load your dataset
data_path = &apos;data/SwahiliNewsClassificationDataset.csv&apos;
df = pd.read_csv(data_path)
</code></pre>
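<p>Right after loading, it can help to peek at the size and class balance of the data. Here is a quick illustrative check on a tiny made-up frame (the real file uses the same <code>content</code> and <code>category</code> columns, as the next snippets show):</p><pre><code class="language-python">import pandas as pd

# A tiny stand-in for the real dataset, only to illustrate the checks
df_demo = pd.DataFrame({'content': ['habari za uchumi leo', 'mechi kali ya soka'],
                        'category': ['uchumi', 'michezo']})

print(df_demo.shape)                       # rows and columns
print(df_demo['category'].value_counts())  # articles per news category
</code></pre>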
<!--kg-card-end: markdown--><h3 id="3-data-preprocessing">3. Data Preprocessing</h3><p>Before feeding the data into the model, several preprocessing steps are applied.</p><p><strong>A: Normalising Data</strong></p><p>The first step in preparing our dataset involves normalising the textual data. This includes removing punctuation, numbers, and special characters, and converting the text to lowercase. The result is a clean and standardised text column.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import re
def normalize_text(text):
    # Remove punctuation, numbers, and special characters
    text = re.sub(r&apos;[^a-zA-Z\s]&apos;, &apos;&apos;, text)
    # Convert to lowercase
    text = text.lower()
    return text
</code></pre>
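<p>To see what the rule does, here is a quick illustrative check (the sample sentence is invented; note that stripped characters leave their surrounding whitespace behind):</p><pre><code class="language-python">import re

sample = 'Habari 24, za Leo!'
cleaned = re.sub(r'[^a-zA-Z\s]', '', sample).lower()
print(cleaned)  # 'habari  za leo' -- a doubled space where '24,' was
</code></pre>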
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-python"># Normalize the text column
df[&apos;Text&apos;] = df[&apos;content&apos;].apply(normalize_text)

texts = df[&apos;Text&apos;].values
labels = df[&apos;category&apos;].values

texts[0], labels[0]
</code></pre>
<!--kg-card-end: markdown--><p><strong>B: Converting Labels to Numerical Formats</strong></p><p>With the textual data normalised, the next phase is converting the categorical labels into numerical formats. This facilitates the use of these labels in our LSTM model.</p><!--kg-card-begin: markdown--><pre><code class="language-python"># one hot encode the labels
encoder = OneHotEncoder(sparse=False)  # note: scikit-learn 1.2+ renamed this argument to sparse_output
labels = encoder.fit_transform(labels.reshape(-1, 1))

encoder.categories_
</code></pre>
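<p>A small illustration of what the encoder produces (the category names here are invented; the real ones come from the dataset):</p><pre><code class="language-python">import numpy as np
from sklearn.preprocessing import OneHotEncoder

demo_labels = np.array(['uchumi', 'michezo', 'uchumi'])
enc = OneHotEncoder()  # on scikit-learn 1.2+, OneHotEncoder(sparse_output=False) skips .toarray()
onehot = enc.fit_transform(demo_labels.reshape(-1, 1)).toarray()

print(enc.categories_)  # the sorted category names
print(onehot)           # one row per label, one column per category
</code></pre>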
<!--kg-card-end: markdown--><h3 id="4-splitting-the-dataset">4. Splitting the Dataset</h3><p>To ensure the robustness of our LSTM model, we split the dataset into training and testing sets. This segregation allows us to train the model on one subset and evaluate its performance on another.</p><!--kg-card-begin: markdown--><pre><code class="language-python"># Split the dataset into training and testing sets
train_texts, test_texts, train_labels, test_labels = train_test_split(texts, labels, test_size=0.2, random_state=42)
</code></pre>
<!--kg-card-end: markdown--><h3 id="5-tokenising-and-padding-sequences">5. Tokenising and Padding Sequences</h3><p>In this critical phase, we tokenise the words and pad the sequences to prepare the data for the LSTM model. The tokenisation process involves converting words into numerical values, while padding ensures uniformity in the length of our sequences.</p><!--kg-card-begin: markdown--><pre><code class="language-python">max_words = 1000  # vocabulary size: keep only the 1,000 most frequent words
max_len = 200     # pad/truncate every sequence to 200 tokens

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(train_texts)

train_sequences = tokenizer.texts_to_sequences(train_texts)
test_sequences = tokenizer.texts_to_sequences(test_texts)

train_data = pad_sequences(train_sequences, maxlen=max_len)
test_data = pad_sequences(test_sequences, maxlen=max_len)
</code></pre>
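<p>What the Tokenizer and pad_sequences calls do under the hood can be sketched in a few lines of plain Python (a deliberately simplified model of the Keras behaviour, ignoring details such as character filters and out-of-vocabulary tokens; the sample texts are made up):</p>

```python
from collections import Counter

def fit_vocab(texts, max_words):
    # Rank words by frequency and keep the top `max_words`,
    # assigning indices from 1 upward (0 is reserved for padding).
    counts = Counter(word for text in texts for word in text.split())
    return {word: i + 1 for i, (word, _) in enumerate(counts.most_common(max_words))}

def texts_to_padded(texts, vocab, max_len):
    # Map each known word to its index, drop unknown words, then
    # left-pad with zeros (Keras pads and truncates at the front by default).
    sequences = [[vocab[w] for w in text.split() if w in vocab] for text in texts]
    return [[0] * (max_len - len(seq)) + seq[-max_len:] for seq in sequences]

vocab = fit_vocab(["habari za leo", "habari njema"], max_words=10)
print(texts_to_padded(["habari za leo"], vocab, max_len=5))  # [[0, 0, 1, 2, 3]]
```

<p>The zeros at the front are why a fixed max_len is needed: the LSTM consumes fixed-length sequences, so short articles are padded and long ones truncated.</p>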
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-python">train_data[0].shape, train_labels[0].shape
</code></pre>
<!--kg-card-end: markdown--><h3 id="6-building-the-model">6. Building the Model</h3><p>The heart of our Swahili news classification project lies in the construction of the LSTM model. With layers of embedding, LSTM, and dense structures, we compile the model to enable its training on our preprocessed data.</p><!--kg-card-begin: markdown--><pre><code class="language-python">embedding_dim = 50  # Adjust based on your preferences

model = Sequential()
model.add(Embedding(input_dim=max_words, output_dim=embedding_dim, input_length=max_len))
model.add(LSTM(100))
model.add(Dense(6, activation=&apos;softmax&apos;))  # one output unit per news category


# Compile the model
model.compile(optimizer=&apos;adam&apos;, loss=&apos;categorical_crossentropy&apos;, metrics=[&apos;accuracy&apos;])
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-python">batch_size = 32
epochs = 10

history = model.fit(train_data, train_labels, validation_split=0.2, batch_size=batch_size, epochs=epochs)

# plot loss and accuracy
import matplotlib.pyplot as plt
plt.plot(history.history[&apos;loss&apos;], label=&apos;train&apos;)
plt.plot(history.history[&apos;val_loss&apos;], label=&apos;test&apos;)
plt.legend()
plt.show()

plt.plot(history.history[&apos;accuracy&apos;], label=&apos;train&apos;)
plt.plot(history.history[&apos;val_accuracy&apos;], label=&apos;test&apos;)
plt.legend()
plt.show()
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-python"># evaluate the model
loss, accuracy = model.evaluate(test_data, test_labels, verbose=0)
print(&apos;Accuracy: %f&apos; % (accuracy*100))
</code></pre>
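<p>The accuracy reported by the evaluation step is simply the fraction of test samples whose highest-probability class lines up with the one-hot label. As a plain-Python sketch (with made-up probability vectors):</p>

```python
def accuracy(pred_probs, true_onehot):
    # A prediction counts as correct when the argmax of the probability
    # vector matches the position of the 1 in the one-hot label.
    correct = sum(
        probs.index(max(probs)) == label.index(1)
        for probs, label in zip(pred_probs, true_onehot)
    )
    return correct / len(true_onehot)

preds = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]
labels = [[1, 0, 0], [0, 1, 0]]
print(accuracy(preds, labels))  # 0.5 (first sample correct, second not)
```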
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-python"># save the model
model.save(&apos;models/starter_swahili_news_classification_model.h5&apos;)
</code></pre>
<!--kg-card-end: markdown--><h3 id="7-inferencing-the-model">7. Inferencing the Model</h3><p>As we conclude our journey, we put our trained model to the test by inferring its predictions on a randomly selected Swahili news headline. The process involves pre-processing the input text and utilising the model to make predictions.</p><!--kg-card-begin: markdown--><pre><code class="language-python"># pre-process for inference
def pre_process(tokenizer, max_len, input_text):
    input_sequence = tokenizer.texts_to_sequences([input_text])
    input_data = pad_sequences(input_sequence, maxlen=max_len)
    return input_data
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-python">def classify_news(model, tokenizer, encoder, max_len, input_text):
    input_data = pre_process(tokenizer, max_len, input_text)
    pred = model.predict(input_data)
    # for each input sample, the model returns a vector of probabilities
    # return all classes with their corresponding probabilities
    result_dict = {}

    for i, category in enumerate(encoder.categories_[0]):
        result_dict[category] = str(round(pred[0][i] * 100, 2))+&apos;%&apos;

    # pick the class with the highest raw probability; taking max() over the
    # percentage strings would compare lexicographically and can be wrong
    highest_prob = encoder.categories_[0][pred[0].argmax()]

    return (result_dict, highest_prob)
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-python"># pick a random news headline from df
news_ = df.sample(1)

news_headline = news_[&apos;Text&apos;].values[0]
news_category = news_[&apos;category&apos;].values[0]

result_dict, classified_label = classify_news(model, tokenizer, encoder, max_len, news_headline)

print(f&apos;News headline: {news_headline}&apos;)
print(f&apos;Actual category: {news_category}&apos;)
print(f&apos;Predicted category: {classified_label}&apos;)
print(f&apos;Confidence scores: {result_dict}&apos;)
</code></pre>
<!--kg-card-end: markdown--><h2 id="challenges-encountered">Challenges Encountered</h2><p>Despite our collective efforts in the session, we ran into some tricky situations. The Swahili language posed a unique challenge due to its complexity, and there wasn&apos;t much research available on how to break down its words. We had to rely on methods designed for English, which may not be the best fit for Swahili. Additionally, finding information tailored to the specific structure and rules of Swahili proved to be quite difficult. Nevertheless, our community faced these challenges head-on. Instead of seeing them as roadblocks, we turned them into opportunities to learn and improve. Our shared determination and collaborative spirit continue to shape a supportive and dynamic learning environment within the UDSM AI Community.</p><h2 id="outcomes">Outcomes</h2><p>As a result of our collective efforts, participants gained valuable insights into tokenisation and various tokenisation methods available. The session served as a rich learning experience, expanding our understanding of how to break down and process words effectively.</p><p>Moreover, after training the model, we achieved a notable milestone with a maximum accuracy of 83.78%. This success reflects the effectiveness of our collaborative exploration and dedication during the session. For a visual representation of our progress, detailed screenshots and graphs showcasing the training outcomes are provided below. 
These visual aids offer a transparent and comprehensive view of our achievements, highlighting the strides we&apos;ve made in understanding Swahili news classification using LSTM.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/01/image-2.png" class="kg-image" alt="Mastering Swahili News Classification with LSTM: A Step-by-Step Guide" loading="lazy" width="547" height="418"><figcaption>train -&gt; loss, test -&gt; val_loss</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/01/image-3.png" class="kg-image" alt="Mastering Swahili News Classification with LSTM: A Step-by-Step Guide" loading="lazy" width="565" height="413"><figcaption>train -&gt; accuracy, test -&gt; val_accuracy</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/01/image-4.png" class="kg-image" alt="Mastering Swahili News Classification with LSTM: A Step-by-Step Guide" loading="lazy" width="2000" height="282" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/01/image-4.png 600w, https://blog.neurotech.africa/content/images/size/w1000/2024/01/image-4.png 1000w, https://blog.neurotech.africa/content/images/size/w1600/2024/01/image-4.png 1600w, https://blog.neurotech.africa/content/images/2024/01/image-4.png 2280w" sizes="(min-width: 720px) 720px"><figcaption>Accuracy</figcaption></figure><h2 id="conclusion">CONCLUSION</h2><p>This step-by-step guide provides a comprehensive overview of the technical implementation, offering both beginner and experienced community members an instructive roadmap for Swahili news classification using LSTM. 
Each part contributes to the general understanding of the project, from data preprocessing to model inferencing.</p><p>In conclusion, the session stood as proof of our community&apos;s strength and commitment to sharing knowledge. Guided by the expertise of our members, <a href="https://twitter.com/eddiegulay">Edgar Gulay</a>, who led a hands-on task through a Jupyter notebook, everyone had the chance to contribute key ideas and actively engage in the session. As the community looks forward to future sessions, the spirit of collaboration and exploration remains at the forefront.</p>]]></content:encoded></item><item><title><![CDATA[Intro To LLMs]]></title><description><![CDATA[Welcome to the introduction series on LLM.]]></description><link>https://blog.neurotech.africa/intro-to-llms/</link><guid isPermaLink="false">659569fd5a0e5405410da3da</guid><dc:creator><![CDATA[Gabriel D Minzemalulu]]></dc:creator><pubDate>Sun, 14 Jan 2024 15:19:05 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1698729747139-354a8053281f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fExMTXN8ZW58MHx8fHwxNzA0MjkxMTU2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1698729747139-354a8053281f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fExMTXN8ZW58MHx8fHwxNzA0MjkxMTU2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Intro To LLMs"><p>Have you ever wondered what LLMs are and why they are so popular? You are not alone. Many people have heard of LLMs, but few actually know what they entail, how they benefit, or how to pursue them. 
In fact, there is a humorous post on LinkedIn that captures this confusion perfectly:</p><blockquote>LLMs are like trendy diets, everyone talks about them, no one really knows how to do them, everyone thinks everyone else is doing them, so everyone claims they are doing them.</blockquote><p>Unlike trendy diets, LLMs are not a passing trend. They are a powerful, fast-maturing technology, and understanding them can boost your skills and career prospects. In this article, I will give you a brief introduction to LLMs and explain what they are, why they are popular, how they work, and how to actually use them.</p><p><strong>INTRODUCTION</strong></p><p>Large Language Models (LLMs) are advanced artificial intelligence (AI) systems that can understand and generate human-like language. They can perform various language tasks, such as answering questions, summarising text, translating languages, and even composing poetry. LLMs have revolutionised the field of natural language processing (NLP) and opened up new possibilities for various industries and applications. In this article, we will explore what LLMs are, how they work, what they can do, and what challenges and limitations they face.</p><p>Conceptually, a Large Language Model comprises just two essential files in a hypothetical directory. One file holds the parameters, known as the parameter file, while the other is the executable file responsible for running the Neural Network that constitutes the language model.</p><p>For illustration purposes in this article, we will focus on a specific LLaMa 2 series model &#x2013; the llama-2-70b model. This particular model has 70 billion parameters, all thoroughly trained on a substantial portion of text sourced from the internet.</p><p>The parameters themselves represent the weights of the Neural Network.
In the case of the llama-2-70b model, an open-source model from Meta, each parameter is stored as 2 bytes, resulting in a parameters file size of 140 GB.</p><p><strong>OBTAINING THE PARAMETERS (Model Training)</strong></p><p>Running an LLM, specifically model inferencing, is a relatively undemanding task. The computational heavy lifting comes in obtaining the parameters, that is, during the model training phase.</p><p>Obtaining the parameters involves a process similar to condensing or compressing the internet. Textual segments from the internet, along with various document types, undergo compression, forming what can be conceptualised as a zip file of the internet. This can be seen in the image below:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh7-us.googleusercontent.com/Njo5gTol3fLn_wrikDkE47l2AO_ZjH-VdRqCTYAgoAbQoQX1Ffdjs35AB_GlVJH1lOfyTa3cm_WQe-5cL5MQB4-hJygRcQTmote7GjrcUA-XtFluzdfbZB3-hC8KPeiEsKKACEysEdCOnwGRhVQxIzU" class="kg-image" alt="Intro To LLMs" loading="lazy"><figcaption>Training an LLM</figcaption></figure><p>However, it&apos;s essential to note that this process isn&apos;t a conventional compression; instead, it employs a lossy compression method, distinct from the typical lossless compression.</p><p>By the standards of contemporary model training, the numbers above are modest; state-of-the-art runs exceed them by a factor of 10 or more. This underscores the scale of the financial investments&#x2014;amounting to tens or hundreds of millions of dollars&#x2014;in the training of Language Models (LLMs). Such training involves extensive compute clusters and datasets.</p><p>Once the parameters are obtained through the completion of the model training phase, executing the model (model inferencing) becomes a relatively computationally economical task.</p><p><strong>NEURAL NETWORKS</strong></p><p>Essentially, Language Models (LLMs) operate by predicting the next word in a sequence.
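</p><p>As a toy illustration of this objective, consider a bigram model: nothing like a transformer in scale or power, but trained on the very same next-word-prediction task. It simply counts which word tends to follow which (the tiny corpus here is made up):</p>

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # For every word, count how often each other word follows it.
    follows = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    # Predict the most frequently observed successor of `word`.
    return follows[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" (follows "the" twice; "mat" only once)
```

<p>A real LLM replaces this counting table with billions of neural-network parameters and conditions on a long context rather than a single word, but the training signal is the same.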
Mathematically, it can be demonstrated that there exists a close correlation between prediction and compression. This is the reason why the process of training the Neural Network (NN) is often likened to compressing the internet. The NN becomes skilled at accurately predicting the next word by leveraging the insights derived from the compressed dataset.</p><p>Although the concept of predicting the next word may sound straightforward, it is, in reality, a surprisingly powerful objective. It forces the Neural Net to acquire a wealth of knowledge about the world within its parameters.</p><p>To illustrate, consider a random Wikipedia search:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://lh7-us.googleusercontent.com/zW2jimN0PslC3lD3P9O12lZPBreifg5pjhJkbNreXKgLvNIQSZnUW9wvH06596LVldZOMFwef85Oaq7QGZKXc0ui23bopItJmMWtFSefg6A6L5TqkabnGwebZn047K90US58srZMyl7rwTTu2eD_lBg" class="kg-image" alt="Intro To LLMs" loading="lazy"><figcaption>Chandler Bing</figcaption></figure><p>Now put yourself in the shoes of the Neural Net attempting to predict the next word: you&apos;d encounter words loaded with information, many of them highlighted in blue. To predict the next word correctly, the parameters must grasp a great deal of knowledge, such as Chandler&apos;s first and last name, his birthplace, his friendships, whether he is a fictional character, his residence, nationality, occupation, and more. Consequently, in the pursuit of next-word prediction, all the acquired knowledge is condensed, compressed, and stored within the parameters.</p><p><strong>HOW DO WE ACTUALLY USE THE NEURAL NETS?</strong></p><p>When we execute a Neural Network or a Language model, what we receive is akin to a dream or hallucination of a web page: the neural network essentially &quot;dreams&quot; the content of internet documents.</p><p>The NN generates texts from the distribution it was trained on, essentially mimicking these documents.
In essence, it engages in a form of hallucination based on the lossy compression applied during the model training phase.</p><p>Let&apos;s explore a couple of examples to illustrate this process. Consider the generation of ISBNs: the number produced may not exist in reality, but it is generated based on the Neural Net&apos;s understanding that after the word &quot;ISBN&quot; comes a number of a certain length and with specific digits. The NN then fabricates a number that aligns with these criteria.</p><p>Another example involves prompting the model to discuss a specific animal, like a koala bear. The Neural Net produces information about the koala bear based on the knowledge embedded in its parameters. This knowledge is acquired through lossy compression during the training phase. However, the model doesn&apos;t regurgitate information verbatim from a particular internet source it was trained on.</p><p>In the instances mentioned above, we witness the Language model or the Neural Net in its hallucinatory or dream-like state.</p><p><strong>HOW THE NEURAL NETWORK WORKS</strong></p><p>In this segment of the article, we delve into the mechanics of how the Neural Network or the Language Model accomplishes the task of predicting the next word. This is where things start to get a bit complex.</p><p>Here, we zoom in on the diagram of the Neural network, known as the Transformer neural network architecture. 
A comprehensive understanding involves delving into the mathematical operations and the various stages implicated in the next word prediction task.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.neurotech.africa/content/images/2024/01/image.png" class="kg-image" alt="Intro To LLMs" loading="lazy" width="709" height="720" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/01/image.png 600w, https://blog.neurotech.africa/content/images/2024/01/image.png 709w"><figcaption>Transformer Neural Networks &#x2013; 1 &#x2013; General Architecture and Introduction | codegenius</figcaption></figure><p>However, full comprehension is elusive, and what we do know in detail is limited:</p><ol><li>There are billions of parameters distributed throughout the neural network.</li><li>We know how to iteratively adjust and fine-tune these parameters to enhance predictive capabilities.</li><li>While we can measure the model&apos;s overall predictive performance, understanding how each of the billions of parameters contributes to this performance remains a challenge.</li></ol><p>The neural nets construct a knowledge database, but it remains somewhat perplexing, messy, and imperfect. An example illustrating this is the phenomenon known as the &quot;reversal curse.&quot;</p><p>For instance, if you ask ChatGPT about the mother of Tom Cruise, it might correctly respond with Mary Lee Pfeiffer, and, in the same chat, if you then inquire about Mary Lee Pfeiffer&apos;s son, it might answer with Tom Cruise. 
</p><figure class="kg-card kg-image-card"><img src="https://blog.neurotech.africa/content/images/2024/01/Screenshot-2024-01-04-at-16.39.45.png" class="kg-image" alt="Intro To LLMs" loading="lazy" width="837" height="458" srcset="https://blog.neurotech.africa/content/images/size/w600/2024/01/Screenshot-2024-01-04-at-16.39.45.png 600w, https://blog.neurotech.africa/content/images/2024/01/Screenshot-2024-01-04-at-16.39.45.png 837w" sizes="(min-width: 720px) 720px"></figure><p>Interestingly, if you pose the question in a separate chat, asking specifically about Mary Lee Pfeiffer&apos;s son, you might receive a response indicating a lack of clarity.</p><figure class="kg-card kg-image-card"><img src="https://lh7-us.googleusercontent.com/2lfxRCXfhdaJ-z-kpejto8N6yJ8_EzNx5j9-Y2gtdXQKm6tCiCcBcnHx9GNPTsA0w2EOQVDJFOMOhEIr6LGd5akN36H8_Fs9JSg5tPs2Coom4Q0f_4wbHDJ584uKvETKvM5sqQZbwjpUiRvcFziEK3Y" class="kg-image" alt="Intro To LLMs" loading="lazy"></figure><p>This demonstrates that the knowledge base is somewhat messy and one-dimensional. It cannot be readily accessed in any direction, requiring a specific approach to access information&#x2014;a phenomenon known as Prompt Engineering.</p><p><strong>CONCLUSION</strong></p><p>In summary, Language Models (LLMs) should be viewed as largely inscrutable artefacts. They differ significantly from engineered structures like cars, where each part and its function are well-understood. LLMs, being Neural Networks resulting from a lengthy optimisation process, are not currently fully understood. While there is a field known as interpretability or mechanistic interpretability attempting to decipher the functions of various parts of Neural Networks, LLMs are presently treated as empirical artefacts. We can input data, measure output and behavior, and analyse responses in diverse scenarios, but the intricate workings remain a subject of ongoing exploration.</p>]]></content:encoded></item></channel></rss>