How to Build AI Agents: A Step-by-Step Developer Guide
Building an AI agent is fundamentally different from building a standard REST API. This guide walks developers through the architecture, system prompting, and code required to deploy a fully autonomous agent using the Fikra API.
1. Prerequisites & Environment Setup
We will build this agent in Python. You will need access to a reasoning-capable LLM that supports function calling (tool use). We will use the Fikra AI API, which natively supports tool routing.
pip install requests python-dotenv
Create a .env file in your directory and add your Lacesse API key:
LACESSE_API_KEY="fikra-your-key-here"
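Loading that key at runtime can be sketched as follows. The `try/except` keeps the snippet runnable even if python-dotenv is not installed, and `get_api_key` is a hypothetical helper name, not part of any SDK:

```python
import os

try:
    from dotenv import load_dotenv  # pip install python-dotenv
    load_dotenv()  # reads KEY=value pairs from .env into os.environ
except ImportError:
    pass  # fall back to whatever is already in the environment

def get_api_key():
    # Fail fast instead of sending requests with a missing key.
    key = os.getenv("LACESSE_API_KEY")
    if not key:
        raise RuntimeError("LACESSE_API_KEY is not set")
    return key
```

Failing fast here saves you from debugging confusing 401 responses deep inside the agent loop.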
2. Defining the Agent's Persona (System Prompt)
The System Prompt is the core instruction set for your agent. It dictates how the agent should behave, what tone it should use, and, crucially, how it should approach problems. To reduce hallucination, set strict boundaries.
system_prompt = """
You are a highly capable Logistics AI Agent.
Your goal is to monitor tracking numbers and assist users with delays.
RULES:
1. If a user asks for an order status, ALWAYS use the `check_order_status` tool.
2. Never guess or hallucinate a tracking number.
3. If the tool returns an error, inform the user politely and offer human escalation.
"""
3. Defining API Tools (JSON Schema)
This is where the agent connects to your backend. You must define the tools the agent has access to using JSON Schema. This tells the Fikra model exactly what parameters it needs to generate to execute a specific function.
tools = [
    {
        "type": "function",
        "function": {
            "name": "check_order_status",
            "description": "Fetches the current real-time status of a logistics package.",
            "parameters": {
                "type": "object",
                "properties": {
                    "tracking_id": {
                        "type": "string",
                        "description": "The 10-digit alphanumeric tracking number"
                    }
                },
                "required": ["tracking_id"]
            }
        }
    }
]
For a deeper dive into how this JSON schema translates to real-world actions, read our guide on AI Agents Using APIs.
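One detail worth noting before the full loop: the model returns the arguments for that schema as a JSON string, not a Python dict, so your code must parse it before dispatching. The tool-call shape below mirrors OpenAI-compatible APIs and is an assumption about the Fikra response format:

```python
import json

# Hypothetical tool call as it might appear in the model's response
tool_call = {
    "id": "call_001",
    "function": {
        "name": "check_order_status",
        "arguments": '{"tracking_id": "ABC1234567"}',  # a JSON string, per the schema
    },
}

# Parse the string into a dict before passing it to your function
args = json.loads(tool_call["function"]["arguments"])
print(args["tracking_id"])  # → ABC1234567
```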
4. The Execution Loop (Code Example)
Here is a basic Python implementation of an agent loop. We send the user's query and our tool definitions to the API. If the LLM decides to use a tool, we execute the local function and send the result back to the LLM.
import requests
import json
import os

API_URL = "https://lacesse.co.ke/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.getenv('LACESSE_API_KEY')}"}

# Mock local function representing your database
def check_order_status(tracking_id):
    if tracking_id == "ABC1234567":
        return "Status: Delayed at Mombasa Port due to weather."
    return "Status: Order not found."

def run_agent(user_input):
    payload = {
        "model": "fikra-ternary-v1",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input}
        ],
        "tools": tools
    }

    # Step 1: Send to the LLM
    response = requests.post(API_URL, headers=HEADERS, json=payload).json()
    message = response['choices'][0]['message']

    # Step 2: Check whether the LLM wants to call a tool
    # (.get() also handles APIs that return the key with a null value)
    if message.get('tool_calls'):
        tool_call = message['tool_calls'][0]
        if tool_call['function']['name'] == 'check_order_status':
            # Extract the arguments generated by the AI
            args = json.loads(tool_call['function']['arguments'])
            # Execute the local function
            tool_result = check_order_status(args['tracking_id'])

            # Step 3: Send the result BACK to the LLM to formulate a final answer
            payload['messages'].append(message)  # Append the assistant's tool call
            payload['messages'].append({
                "role": "tool",
                "tool_call_id": tool_call['id'],
                "content": tool_result
            })
            final_response = requests.post(API_URL, headers=HEADERS, json=payload).json()
            return final_response['choices'][0]['message']['content']

    # If no tools were called, return the plain text reply
    return message['content']

# Test the agent
print(run_agent("Where is my package? The tracking is ABC1234567."))
This is the raw logic. In production, orchestrating memory and multiple tools manually gets messy. We highly recommend using a framework like Fikra Claw to handle the routing and vector memory automatically.
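As a bridge toward that, the single-tool `if` above can be generalized into a registry plus a loop that keeps executing tool calls until the model answers in plain text. This is an illustrative sketch, not Fikra Claw itself: `call_llm` stands in for the `requests.post` call, `reschedule_delivery` is a hypothetical second tool, and the message shape is assumed to match the example above:

```python
import json

# Tools re-declared here so the sketch is self-contained
def check_order_status(tracking_id):
    return "Status: Delayed at Mombasa Port due to weather."

def reschedule_delivery(tracking_id, new_date):  # hypothetical second tool
    return f"Rescheduled {tracking_id} for {new_date}."

# Registry: tool name (as declared in the JSON schema) -> local Python function
TOOL_REGISTRY = {
    "check_order_status": check_order_status,
    "reschedule_delivery": reschedule_delivery,
}

def run_agent_loop(messages, call_llm):
    # Keep going until the model replies with text instead of a tool call
    while True:
        message = call_llm(messages)
        messages.append(message)
        tool_calls = message.get("tool_calls")
        if not tool_calls:
            return message["content"]
        for tool_call in tool_calls:
            fn = TOOL_REGISTRY[tool_call["function"]["name"]]
            args = json.loads(tool_call["function"]["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call["id"],
                "content": fn(**args),
            })
```

The registry keeps dispatch declarative: adding a tool means writing the function, registering it, and appending its JSON schema, with no changes to the loop itself.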
5. Frequently Asked Questions
What programming languages are best for building AI agents?
Python is currently the industry standard due to its rich ecosystem of AI frameworks (like LangChain) and data science libraries. However, Node.js/TypeScript is rapidly growing in popularity, and the Lacesse Fikra API offers native SDKs for both.
How long does it take to build a basic AI agent?
Using a modern framework like Fikra Claw or LangChain, an experienced developer can build a basic AI agent connected to a database and an external API in under 4 hours.
Do I need a massive GPU to run a local AI agent?
No. You can build agents by making API calls to cloud models (like the Lacesse Fikra API), which requires zero local GPU compute. If you require absolute privacy, Fikra Ternary models are highly optimized to run on standard consumer CPUs or Lacesse EdgeCore hardware.