Demo: Create an AI Agent From Scratch: LangChain + Ollama Diet Generator App

Platform Setup & Discovery

Step 1: Sign up on JCurve.Tech

  • Action: Visit https://jcurve.tech.

  • Navigation: Click AI Agent Hackathon in the top right (see Image 1, follow the arrow).

  • Result: You’ll land on the hackathon page. Review the details to understand the initiative.

  • Next: Click Signup in the top right (see Image 2) to create an account.

Image 1

Image 2


Step 2: Verify your email address and create a password

  • Action: On the signup screen, enter your email address (use the same email as on your CV/Resume) and click Verify Email.

  • Email Check: Look for a verification email in your inbox.

  • Verification: Click the link in the email to verify your email address.

  • Password Setup: After verification, create a password to complete the signup process (see Images 3 and 4).

Image 3

Image 4


Step 3: Sign in and select your dream job

Action: Log in using your credentials after password confirmation.

Navigation: You’ll see the “Select Your Dream Job” page (see Image 5).

Selections:

  • Choose your status (e.g., Student) and enter your student ID.

  • Select your Dream Company, job family/category, and skills/technologies (e.g., Front End Engineer II, IN Payments; see Image 6).

Image 5:

Image 6:


Step 4: Practical Skills and Hackathon Overview

Action: After selecting your dream job, you’ll land on the Roadmap page for that job.

Navigation: Click Practical Training - AI Hackathon 2025 to go to the Hackathon homepage.

Additional tip: Click Upload Resume to check your current skills against the dream role you have chosen. This helps you identify missing skills so you can upskill and become a more desirable candidate for your target employer.


Exploration:

  • Review the hackathon’s 4-week structure, learning tracks, and project details.

  • Take your time browsing the content: this isn’t just theory. You’ll build production-ready AI agents and gain real-world experience valued by employers.

Image 7:

Image 8:

Image 9:


Steps 5-7: Hackathon Registration & Setup

Step 5: Complete Registration Form

  • Action: Click the Register button on the hackathon page.

  • Details: Fill in your personal details, select your preferred learning track, and share your motivation for joining.

    • Please note that your GitHub email should be the same email account you use on the JCurve platform, and your GitHub username should belong to that same GitHub account.

  • Result: Your registration is submitted, initiating your participation.

Image 10:

Step 6: Email Confirmation from JCurve/GitHub

  • Action: Check your inbox (and spam folder) for a GitHub invitation from JCurve.

  • Purpose: The invitation provides access to your private GitHub repository, where your code and projects will be stored.

Image 11:

Step 7: Accept GitHub Invitation

  • Action: Log in to GitHub and accept the invitation.

  • Result: Gain access to starter code, templates, and your personal development workspace in the private repository.

Image 12:

Image 13:

Your private GitHub repository is fully set up, and you are ready to start the AI Agent building journey.


Overview of this demo building guide

This demo guide shows how to build an AI agent for the hackathon using LangChain, Ollama, and Streamlit. It is a completely free approach to creating your first AI Agent!

The platform discovery, hackathon registration, and submission process remain the same as in the guide above.

Here is the overview of the document:


Step 1: Understand the Project Structure and Objective

Objective

We’re building a simple diet recommendation app using:

  • LangChain to structure LLM interactions

  • Ollama as a local LLM backend because it is open source (using LLaMA 3.2)

  • Streamlit for the user interface

The app takes basic user inputs — age, gender, body composition, activity level — and returns a personalized diet recommendation.

Project Structure

diet-app/
├── app.py                      # Streamlit frontend UI
├── generate_diet.py            # Core logic with LangChain + Ollama
└── Pipfile / requirements.txt  # Dependency tracking (optional)


🗂️ 1. Create the Project Folder

🎯 Goal of This Step

We’re going to set up the project directory and file structure, then begin writing the core function that will eventually generate diet recommendations.

This step does not handle any AI logic yet — it focuses entirely on naming, structure, clarity, and modularity. In your terminal or file explorer, create a new folder for your project:

mkdir diet_generator_guide
cd diet_generator_guide

📁 2. Inside This Folder, Create the Following Files

touch app.py
touch generate_diet.py

· generate_diet.py will contain all the logic for generating a diet plan

· app.py will hold the Streamlit frontend code


💡 Why split into two files? This modularity follows the Single Responsibility Principle — each file has one clear purpose, which improves readability and reusability.


📦 Dependencies

Make sure you are in the project directory before installing the dependencies. Install the required packages:

pip install langchain langchain-community langchain-ollama streamlit

To install using pipenv:

pipenv install langchain langchain-community langchain-ollama streamlit
Image 1.3 - Downloading dependencies using pipenv

🔍 What Each Library Does

To fully understand what we're installing and why, here's a breakdown of each core dependency:


🧠 langchain

📦 langchain is a framework that helps structure prompts, chains, memory, and tools when building applications with large language models.

· We're using it to create modular chains — connecting prompts and models in a clean, readable pipeline.

· Example: prompt | llm is a LangChain idiom (a minimal sketch follows below).

📚 LangChain Introduction →
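A minimal sketch of that idiom (assumes the packages from the Dependencies section above are installed and Ollama is running locally; the greeting prompt is purely illustrative):

# Sketch of the "prompt | llm" idiom this guide builds toward
from langchain_core.prompts import PromptTemplate
from langchain_ollama import ChatOllama

prompt = PromptTemplate.from_template("Say hello to {name}.")
llm = ChatOllama(model="llama3.2")
chain = prompt | llm  # compose prompt and model into one chain
print(chain.invoke({"name": "world"}).content)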


🧩 langchain-community

📦 This package provides community-maintained integrations (third-party model providers, vector stores, document loaders, and similar tools) that build on LangChain’s core abstractions.

· Often used implicitly when working with core components of LangChain.

· Future-proof: it decouples community tools from LangChain’s evolving core.


🤖 langchain-ollama

📦 This is the official LangChain integration for Ollama — a local model runner.

· Allows you to use ChatOllama() as if you were using OpenAI or HuggingFace APIs, but without the cloud.

· Connects to the Ollama server on your machine, usually at localhost:11434 (see the sketch below).

📚 LangChain ↔ Ollama Docs →
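As a quick illustration, a hedged sketch of pointing ChatOllama at the local server (the base_url shown is Ollama’s default, so it can normally be omitted):

# Sketch: ChatOllama connecting to the local Ollama server
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2", base_url="http://localhost:11434")  # default address
print(llm.invoke("Give me one short nutrition tip.").content)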


🎛️ streamlit

📦 Streamlit is a lightweight Python web framework for turning scripts into shareable web apps.

· We’ll use it to build a frontend interface where users select gender, age, body type, etc.

· One-liner UIs like st.selectbox() and st.write() let you build apps without HTML/CSS/JS (a tiny sketch follows below).

📚 Streamlit PyPI page →
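A tiny sketch of those one-liners (hello_app.py is a hypothetical filename; run it with streamlit run hello_app.py):

# hello_app.py - minimal Streamlit widgets in action
import streamlit as st

name = st.selectbox("Pick a name", ["Ada", "Grace", "Alan"])  # dropdown widget
st.write(f"Hello, {name}!")  # versatile output helper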


🦙 2. Install Ollama

To run LLaMA locally, you’ll need to install and configure Ollama on your system:

  1. Go to ollama.com. Create a free account and download the installer for your operating system (macOS, Windows, or Linux).

  2. Install Ollama Follow the platform-specific installation instructions provided on the site.

  3. Run LLaMA 3.2 locally. After installation, open your terminal and run:

     ollama run llama3.2

This will automatically download the llama3.2 model and start a local server that LangChain will connect to. A quick reachability check follows below.

Image ___ - Ollama landing page
Test Run with Ollama's llama3.2
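Before wiring up LangChain, you can confirm the server is reachable from Python. A minimal sketch (assumes Ollama is running on its default port; its root endpoint replies with a short status string):

# check_ollama.py - quick sanity check for the local Ollama server
import urllib.request

with urllib.request.urlopen("http://localhost:11434") as resp:
    print(resp.read().decode())  # expected output: "Ollama is running"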

✍️ Step 2. Open generate_diet.py and Start Defining Your Core Function

generate_diet.py

def generate_diet(age, gender, body_comp, activity_level):
    """
    Generate a personalized diet recommendation based on user input.

    Parameters:
    - age (int): The user's age in years
    - gender (str): The user's gender (e.g., 'Male', 'Female')
    - body_comp (str): Body composition category (e.g., 'Lean', 'Overweight')
    - activity_level (str): Physical activity level (e.g., 'Sedentary', 'Athlete')

    Returns:
    - A string with the final diet recommendation (to be implemented later)
    """
    pass  # This function will use AI logic in the next steps

🧠 Why This Structure?

| Element | Purpose | Best Practice |
| --- | --- | --- |
| generate_diet(…) | Core reusable function | Clear, descriptive name using snake_case |
| Parameters (age, gender, etc.) | Inputs from Streamlit UI | Explicit arguments improve readability |
| Docstring | Explains what the function does | Keeps your code self-documenting |
| pass | Placeholder for now | Use it when defining empty functions |


📌 Note on Data Types

While Python is dynamically typed, always think in terms of what types each parameter should be:

· age → int (whole number)

· gender, body_comp, activity_level → str (predefined choices via UI)

You can later enforce these with optional type hints, like:

def generate_diet(age: int, gender: str, body_comp: str, activity_level: str):

📝 Note on Docstrings: Importance & Role in Agentic Context

Docstrings are more than just comments—they’re embedded documentation that:

  • 📚 Self-document your function’s purpose, parameters, and output

  • Enable IDEs, linters, and automated tooling (like AI docstring generators) to provide inline help and clarity.

  • Establish uniform structure across your codebase, improving readability and maintainability.

🛠️ In Agentic AI workflows

When building AI agents (like LangChain agents), clear docstrings allow downstream tools to:

  • Introspect the capabilities and expected behavior of functions

  • Make informed decisions when composing multi-step pipelines or choosing tools

  • Potentially use automated docstring generation to maintain documentation as agents evolve (see the sketch after this list)
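For example, LangChain’s @tool decorator (from langchain_core.tools) lifts a function’s docstring into the description an agent reads when deciding which tool to call. A hedged sketch (the function body is a hypothetical stub, not this guide’s real logic):

# Sketch: docstrings become tool descriptions for agents
from langchain_core.tools import tool

@tool
def diet_tool(age: int, gender: str) -> str:
    """Generate a personalized diet recommendation for a person."""
    return f"Diet plan for a {age}-year-old {gender}."  # hypothetical stub

print(diet_tool.name)         # "diet_tool"
print(diet_tool.description)  # taken straight from the docstring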

✅ Step 3: Designing the Prompt Template for Personalized Diet Recommendations


🎯 Goal of This Step

To define a clean, structured, and context-rich prompt using LangChain's PromptTemplate. This prompt will later be combined with the LLM to generate personalized output based on user inputs.


🧩 Why Use a Prompt Template?

When working with LLMs, prompt engineering is as important as the model itself.

· LangChain's PromptTemplate lets us define prompts with placeholders that are dynamically filled with user input.

· It helps maintain a consistent and reusable prompt structure.

· It separates prompt logic from application logic — making your app easier to test, modify, and expand.


📥 Import the PromptTemplate Class

At the top of your generate_diet.py, add:

from langchain_core.prompts import PromptTemplate

🧠 This import gives access to the core PromptTemplate class from LangChain — it lives in langchain_core, not langchain directly.


🧾 Define the Prompt Template

template = """
	You are a nutrition expert tasked with recommending the most 
	relevant aspects of a person's diet based on their core 
	attributes. Given the following details about a person:
	
	Age: {age}
	Sex/Gender: {gender}
	Body Composition: {body_comp}
	Activity Level: {activity_level}
	
	Output a list of key diet aspects this person should focus on. Cover:
	
	Macronutrient balance
	Caloric needs
	Timing/frequency of meals
	Food types to prioritize or avoid
	Supplement needs (if applicable)
	Any special recommendations based on their conditions/preferences
	
	Keep the recommendations concise, practical, and personalized to the 
	attributes given. Avoid generic advice.
	    """

🧠 Prompt Design Rationale

| Line | Why It's There |
| --- | --- |
| "You are a nutrition expert..." | Sets the model's persona and role clearly, increasing relevance and tone accuracy |
| "Given the following details..." | Explicitly introduces structured input and helps the LLM focus |
| "Age: {age}" etc. | The curly braces are placeholders that LangChain fills at runtime |
| "Output a list..." | Specifies what kind of output is needed (a list) |
| "Cover: ..." | Bullet-style enumeration of expected areas improves LLM completeness |
| "Keep it concise..." | Tells the LLM to avoid vague or generic output, improving personalization |

🧠 LLMs follow tone and structure cues. Including bullet-style sections and instructions improves consistency in responses.


📦 Create the PromptTemplate Instance

prompt = PromptTemplate(
    template=template,
    input_variables=["age", "gender", "body_comp", "activity_level"],
)
  • template=: The full text you just defined, with {} placeholders

  • input_variables=: A list of all placeholders used inside the template — must match exactly, or LangChain will throw an error

Best Practice: Always explicitly list your input variables. Avoid dynamic or inferred variables unless absolutely necessary. A quick way to verify them is to format the template by hand, as in the sketch below.
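A quick verification sketch (the sample values are illustrative):

# Sketch: fill the placeholders by hand with .format() to inspect the
# final prompt text before any LLM is involved.
print(prompt.format(
    age=19,
    gender="Male",
    body_comp="Lean",
    activity_level="Moderate",
))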


✅ Summary

You’ve now:

  • Created a role-aligned, structured prompt for the LLM

  • Defined all the variables the prompt depends on

  • Followed LangChain best practices for separating prompt logic from execution

🦜 Step 4: Building the LangChain Pipeline with LCEL and LLM Initialization


🎯 Goal of This Step

To initialize the LLM, connect it with the prompt using LangChain's LCEL syntax, and invoke the full pipeline to get real, personalized AI output.


⚙️ 1. Initialize the LLM (Using Ollama)

Add this to your generate_diet.py:

from langchain_ollama import ChatOllama

The llm variable declaration below should go inside the generate_diet function.

llm = ChatOllama(model="llama3.2")

· ChatOllama is a LangChain interface for local LLMs served by Ollama

· "llama3.2" is the model identifier (must match the one running in your Ollama server)


Why use ChatOllama?

· No API keys required

· Fully offline and privacy-friendly

· Faster iteration for local development


🔁 2. Swapping the LLM (e.g. OpenAI Instead)

Want to switch to OpenAI later? It’s as easy as changing the import and initialization:

from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")

· The downstream code (prompt | llm, .invoke(), etc.) stays exactly the same.

· This abstraction is a core strength of LangChain.

⚠️ Remember: Using cloud LLMs like OpenAI requires setting your API key, usually via a .env file or environment variable, for example:
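A minimal sketch, with a placeholder key value:

# Sketch: ChatOpenAI reads the OPENAI_API_KEY environment variable.
# The value below is a placeholder; in practice, load it from a .env file
# and never commit real keys to your repository.
import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder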


🧩 3. Connect Prompt and LLM Using LCEL

 diet_chain = prompt | llm 

This is LangChain’s Expression Language (LCEL) — a compositional syntax to build chains.

📘 Reference: LangChain LCEL Docs →

| Concept | Meaning |
| --- | --- |
| prompt | A LangChain PromptTemplate object |
| llm | A LangChain ChatOllama (or ChatOpenAI) object |
| prompt \| llm | Pipes the formatted prompt into the LLM as a single runnable chain |

This is the equivalent of writing:

formatted_prompt = prompt.format(age=..., gender=..., ...)
llm_output = llm.invoke(formatted_prompt)

…but cleaner, more declarative, and composable with other LangChain tools like memory, retrievers, tools, or output parsers.


🚀 4. Invoke the Chain

res = diet_chain.invoke(
    input={
        "age": age,
        "gender": gender,
        "body_comp": body_comp,
        "activity_level": activity_level,
    }
)
return res

· .invoke() is how you execute the LCEL pipeline

· You pass a dictionary with all required input_variables for the prompt

· The result, res, is an LLM-generated message object

🔍 Access the text with res.content when returning or displaying output.


📦 Full Code (Together)

from langchain_core.prompts import PromptTemplate
from langchain_ollama import ChatOllama

def generate_diet(age, gender, body_comp, activity_level):
	# docstring
	"""
	Generate a personalized diet recommendation based on user input.

    	Parameters:
	    - age (int): The user's age in years
	    - gender (str): The user's gender (e.g., 'Male', 'Female')
	    - body_comp (str): Body composition category (e.g., 'Lean', 'Overweight')
	    - activity_level (str): Physical activity level (e.g., 'Sedentary', 'Athlete')
	
	    Returns:
	    - A string with the final diet recommendation (to be implemented later)
	"""
	# template to feed into PromptTemplate
	template = """
	You are a nutrition expert tasked with recommending the most relevant aspects of a person's diet based on their core 
	attributes. Given the following details about a person:
	
	Age: {age}
	Sex/Gender: {gender}
	Body Composition: {body_comp}
	Activity Level: {activity_level}
	
	Output a list of key diet aspects this person should focus on. Cover:
	
	Macronutrient balance
	Caloric needs
	Timing/frequency of meals
	Food types to prioritize or avoid
	Supplement needs (if applicable)
	Any special recommendations based on their conditions/preferences
	
	Keep the recommendations concise, practical, and personalized to the attributes given. Avoid generic advice.
	    """
	# initializing PromptTemplate
	prompt = PromptTemplate(
		template=template,
		input_variables=["age", "gender", "body_comp", "activity_level"],
	)
	
	# initializing the llm (could be replaced with openai or other llms)
	llm = ChatOllama(model="llama3.2")

	# chaining prompt into the llm
	diet_chain = prompt | llm

	# invoking chain with relevant inputs
	res = diet_chain.invoke(
		input={
			"age": age,
			"gender": gender,
			"body_comp": body_comp,
			"activity_level": activity_level,
		}
	)
	return res

🧪 Optional: Quick Test the Function with __main__

Before integrating into Streamlit, you can quickly check that your logic works by calling the function directly:

if __name__ == "__main__":
	print("Generating Diet...")
	result = generate_diet(19, 'Male', '25% Fat', '5 times a week')
	print(result.content)

🔍 Explanation

· if __name__ == "__main__": is a Python idiom to test code only when the script is run directly, not when it’s imported.

· generate_diet(...) is called with sample data to simulate a user input.

· This prints the output from the LLM so you can verify:

· The pipeline works

· The prompt fills correctly

· You’re getting real, structured output

✅ This is a great way to debug LangChain workflows before adding UI layers like Streamlit.

⚠️ Be sure to run this in the same environment where Ollama is running locally.
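For example, keep ollama run llama3.2 (or ollama serve) active in one terminal, then run python generate_diet.py from the project folder in another terminal.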


✅ Summary

· You’ve now created a full prompt → LLM → output pipeline using LCEL

· It’s modular, expressive, and swappable (OpenAI, Anthropic, etc.)

· This chain is now ready to be called from any frontend — next up: Streamlit UI


✅ Step 5: Creating the Streamlit UI (app.py)


🎯 Goal of This Step

To build a clean, interactive frontend using Streamlit that lets users input personal data and trigger the diet generation function.


📥 1. Import Required Modules

Open your app.py file and start with:

import streamlit as st
from generate_diet import generate_diet

✅ Explanation:

· import streamlit as st: Imports the full Streamlit module and aliases it as st, a convention in all Streamlit apps.

· from generate_diet import generate_diet: Brings in your backend function from generate_diet.py.

📚 Streamlit docs → Intro to Writing Apps


🖼️ 2. Add Page Title and Create Input Widgets

· st.title() renders a large top-level title at the top of the page 📚 st.title() documentation →

· st.selectbox() creates a dropdown where users can choose one option from a list; the first argument is the label, the second is the list of options 📚 st.selectbox() documentation →

· st.number_input() creates a box for numeric input; min_value and max_value constrain the user input 📚 st.number_input() →

st.title('Diet Generator')
gender = st.selectbox('What is your Gender?', ['Male', 'Female'])
age = st.number_input('What is your Age?', min_value=1, max_value=120)
body_comp = st.selectbox("What is your Body Composition?", ["Lean", "Average", "Overweight", "Obese"])
activity = st.selectbox("What is your Activity Level?", ["Sedentary", "Light", "Moderate", "Active", "Athlete"])

🔘 3. Add a Button to Trigger Diet Generation

· st.button() creates a clickable button

· Returns True only during the script rerun in which it was clicked, which is useful for conditionally running logic 📚 st.button() →

generate_button = st.button('Generate')

✅ Summary So Far

Your Streamlit UI now includes:

· A clear title

· Input fields for age, gender, body type, and activity level

· A button to trigger backend logic

In the next step, we’ll wire this to actually call generate_diet() and display the results beautifully using st.write().


app.py code so far:

import streamlit as st
from generate_diet import generate_diet


st.title('Diet Generator')
gender = st.selectbox('What is your Gender?', ['Male', 'Female'])
age = st.number_input('What is your Age?', min_value=1, max_value=120)
body_comp = st.selectbox("What is your Body Composition?", ["Lean", "Average", "Overweight", "Obese"])
activity = st.selectbox("What is your Activity Level?", ["Sedentary", "Light", "Moderate", "Active", "Athlete"])
generate_button = st.button('Generate')

✅ Step 6: Connect the UI to Logic and Display the Output


🎯 Goal of This Step

To wire up the Streamlit form to our backend function and show the generated diet recommendation when the user clicks the button.


🧩 Full Implementation

Add the following to the bottom of your app.py file:

if generate_button:
	print('Generating Diet...')
	st.write('Generating Diet...')
	diet = generate_diet(age=age, body_comp=body_comp, activity_level=activity, gender=gender)
	st.write(diet.content)

🔍 Explanation

if generate_button:

· This block only runs when the button is clicked.

· It prevents the function from running automatically every time the app refreshes.

📚 Streamlit button interaction docs →


🔎 print("Generating Diet...")

· This prints to the terminal or Streamlit CLI — useful during development to confirm logic is being triggered.

🧾 st.write("Generating Diet...")

· This shows a message in the browser UI so the user knows something is happening.

💡 For a smoother user experience, you could later replace this with a spinner (st.spinner) or loading animation.

📚 Streamlit spinner interaction docs →


🤖 diet = generate_diet(...)

· This calls the core function you built in generate_diet.py

· Passes the user inputs collected from the Streamlit UI

· Internally, this triggers the LangChain + Ollama pipeline and returns an object with .content as the LLM’s output


🧾 st.write(diet.content)

· Displays the actual recommendation generated by the LLM

· diet is the full return object, and .content contains the plain text result 📚 st.write() →

st.write() is Streamlit’s most versatile display method — works with strings, markdown, JSON, etc.


app.py code:

import streamlit as st
from generate_diet import generate_diet


st.title('Diet Generator')
gender = st.selectbox('What is your Gender?', ['Male', 'Female'])
age = st.number_input('What is your Age?', min_value=1, max_value=120)
body_comp = st.selectbox("What is your Body Composition?", ["Lean", "Average", "Overweight", "Obese"])
activity = st.selectbox("What is your Activity Level?", ["Sedentary", "Light", "Moderate", "Active", "Athlete"])
generate_button = st.button('Generate')

if generate_button:
	print('Generating Diet...')
	st.write('Generating Diet...')
	diet = generate_diet(age=age, body_comp=body_comp, activity_level=activity, gender=gender)
	st.write(diet.content)

UI:


✅ Final Summary

Your app is now:

· Fully functional end-to-end

· Structured into clean frontend/backend separation

· Built with best practices in prompt design, modular code, and user experience


✅ Optional Step 7: Next Steps and Full Code Review


🚀 Next Steps: Ideas to Polish and Expand

Here are some practical enhancements you can make:


🎨 1. Improve Output Presentation

· Use st.markdown() instead of st.write() to render formatted lists or sections

· Wrap each category (e.g., Macronutrient, Caloric Needs) in bullet points or bold headings

st.markdown("### Your Personalized Diet Plan")
st.markdown(diet.content)

📚 st.markdown docs →


🔄 2. Add Loading Feedback

Let users know the app is working using a spinner:

with st.spinner("Generating your diet plan..."):
    diet = generate_diet(...)

📚 st.spinner docs →


❌ 3. Add Basic Error Handling

Catch common issues such as empty responses or input errors:

try:
    diet = generate_diet(...)
    st.markdown(diet.content)
except Exception as e:
    st.error("Something Went wrong. Please try again.")
    st.text(str(e))

📤 4. Export Output (as Text, PDF, etc.)

Let users download the result as .txt:

st.download_button("Download Plan", data=diet.content, file_name="diet_plan.txt")

📚 st.download_button docs →


🧠 5. Add User Preferences

Allow users to specify dietary restrictions (e.g., vegetarian, allergies) and pass them to the prompt template for even more personalization. A hedged sketch follows:
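This sketch assumes generate_diet() gains a restrictions parameter and the template gains a matching {restrictions} placeholder (both hypothetical):

# Sketch (hypothetical extension): collect restrictions in the UI and
# pass them through to the backend.
restrictions = st.multiselect(
    "Any dietary restrictions?",
    ["Vegetarian", "Vegan", "Gluten-free", "Nut allergy"],
)

if st.button("Generate with preferences"):
    diet = generate_diet(age, gender, body_comp, activity,
                         restrictions=", ".join(restrictions) or "None")
    st.markdown(diet.content)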


🧾 Full Code Review (Reference Section)

generate_diet.py

from langchain_core.prompts import PromptTemplate
from langchain_ollama import ChatOllama

def generate_diet(age, gender, body_comp, activity_level):
	"""
	Generate a personalized diet recommendation based on user input.

    Parameters:
    - age (int): The user's age in years
    - gender (str): The user's gender (e.g., 'Male', 'Female')
    - body_comp (str): Body composition category (e.g., 'Lean', 'Overweight')
    - activity_level (str): Physical activity level (e.g., 'Sedentary', 'Athlete')

    Returns:
    - A string with the final diet recommendation (to be implemented later)
    """

	template = """
	You are a nutrition expert tasked with recommending the most relevant aspects of a person's diet based on their core 
	attributes. Given the following details about a person:
	
	Age: {age}
	Sex/Gender: {gender}
	Body Composition: {body_comp}
	Activity Level: {activity_level}
	
	Output a list of key diet aspects this person should focus on. Cover:
	
	Macronutrient balance
	Caloric needs
	Timing/frequency of meals
	Food types to prioritize or avoid
	Supplement needs (if applicable)
	Any special recommendations based on their conditions/preferences
	
	Keep the recommendations concise, practical, and personalized to the attributes given. Avoid generic advice.
	    """

	prompt = PromptTemplate(
		template=template,
		input_variables=["age", "gender", "body_comp", "activity_level"],
	)
	llm = ChatOllama(model="llama3.2")

	diet_chain = prompt | llm

	res = diet_chain.invoke(
		input={
			"age": age,
			"gender": gender,
			"body_comp": body_comp,
			"activity_level": activity_level,
		}
	)
	return res


if __name__ == "__main__":
	print("Generating Diet...")
	result = generate_diet(19, 'Male', '25% Fat', '5 times a week')
	print(result.content)

app.py

import streamlit as st
from generate_diet import generate_diet


st.title('Diet Generator')
gender = st.selectbox('What is your Gender?', ['Male', 'Female'])
age = st.number_input('What is your Age?', min_value=1, max_value=120)
body_comp = st.selectbox("What is your Body Composition?", ["Lean", "Average", "Overweight", "Obese"])
activity = st.selectbox("What is your Activity Level?", ["Sedentary", "Light", "Moderate", "Active", "Athlete"])
generate_button = st.button('Generate')

if generate_button:
	print('Generating Diet...')
	st.write('Generating Diet...')
	diet = generate_diet(age=age, body_comp=body_comp, activity_level=activity, gender=gender)
	st.write(diet.content)
