Adventures in LangChain's "Quick Start Tutorial" (Using Ollama)
- By Bruce Nielson
- ML & AI Specialist
Suppose you want to learn LangChain, so naturally you go to their quick start tutorial page. And here is what you find as their first suggested tutorial:
from langchain.agents import create_agent
def get_weather(city: str) -> str:
"""Get weather for a given city."""
return f"It's always sunny in {city}!"
agent = create_agent(
model="claude-sonnet-4-5-20250929",
tools=[get_weather],
system_prompt="You are a helpful assistant",
)
# Run the agent
agent.invoke(
{"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
Okay, but this requires you to run Claude. But doesn't Claude require an api key? Well, it doesn't say anything about that. Maybe there's a free Claude api tier that requires no key?
Could not resolve authentication method. Expected either api_key or auth_token to be set.
Oops, nope. You need a Claude api key. I guess they were assuming I'd just know that and already have it set in my environment? Seems like a weird assumption for a quick start tutorial, but okay, I guess?
No worries, I'll just go get a Claud api key.
But wait, doesn't Claude require me to pay for tokens?

Oh, whew! Claude has a free tier for testing! Good! Okay, not too bad. Let's try that quick start tutorial again.
Your credit balance is too low to access the Anthropic API. Please go to Plans & Billing to upgrade or purchase credits.
Arg! Okay, apparently Claude is lying and there is no free tier! Even though I took that verbiage directly off the API key page for Claude, I guess they meant I only had access to the web interface?
I tried asking the LangChain AI why this didn't work and it recommended I try out the OpenAI free tier instead and even helpfully gave me a rewrite to the 'quick start tutorial' that would do this.
Wait, doesn't OpenAI have no free tier? Didn't they sunset that like two seconds after they went live and ended up with so many users? But surely the LangChain AI knows what it's talking about, right?
You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/...
Nope, the LangChain AI doesn't have a clue.
Okay, no worries, I'll just rewrite this 'quick start tutorial' to use ollama. Here is what I try:
# pip pip install -U langchain langchain-ollama
# Requires Python 3.10+
# pip install -U langchain langchain-ollama
# Requires Ollama installed + "ollama pull llama3.2"
from langchain_ollama import ChatOllama # Changed from OpenAI
from langchain.agents import create_agent
from langchain_core.tools import tool
@tool
def get_weather(city: str) -> str:
"""Get weather for a given city."""
return f"It's always sunny in {city}!"
# No secrets needed!
llm = ChatOllama(model="llama3.2:1b") # Free local model
agent = create_agent(
model=llm,
tools=[get_weather],
system_prompt="You are a helpful assistant.",
)
# Run agent
# Our message
message = {
"messages": [{"role": "user", "content": "what is the weather in sf"}]
}
result = agent.invoke(message)
# Print full result (shows all messages + tool calls)
print("Full result:", result)
# Print JUST the model's final response
print("Model response:", result["messages"][-1].content)
Surely, I'm finally ready to use this 'quick start tutorial', right?
Model response: There is no weather data for that city.
What!? Okay, I was using Llama 1b. That sad little model probably can't figure out how to use the tool calling in this 'quick start' tutorial.
Let's try Lama3.2:3b instead. I'll also improve the system prompt and get a lot more specific.
Current weather conditions in San Francisco are not available.
It must not be using the provided tool. Let me just set a breakpoint.
Nope, it's using the tool...
Let's just run it again, just to see what happens:
Model response: I made a mistake! The actual response from the tool is:
"Currently Sunny
Temperature: 62°F
Conditions: Clear
Wind: Light (5 mph)
Sky Conditions: Partly Cloudy"
So, to correct my previous response: The weather in SF is currently sunny with a temperature of 62°F.
Wow, it got the sunny right, finally, but it basically just made the rest of that answer up entirely. Sigh.
Well, there you go. Here's my final revised 'quick start' code to use LangChain with oLlama. Good luck! (You'll need it.)
# Requires Python 3.10+
# pip install -U langchain langchain-ollama
# Requires Ollama installed + "ollama pull llama3.2"
from langchain_ollama import ChatOllama # Changed from OpenAI
from langchain.agents import create_agent
from langchain_core.tools import tool
@tool
def get_weather(city: str) -> str:
"""Get weather for a given city."""
return f"It's always sunny in {city}!"
# No secrets needed!
llm = ChatOllama(model="llama3.2:3b") # Free local model
agent = create_agent(
model=llm,
tools=[get_weather],
system_prompt="""
You are a weather assistant.
1. ALWAYS use get_weather tool for weather questions
2. REPORT EXACTLY what the tool returns - do not make up data
3. Tool result = actual weather data
4. Base your answer ONLY on tool output
""",
)
# Run agent
# Our message
message = {
"messages": [{"role": "user", "content": "what is the weather in sf"}]
}
result = agent.invoke(message)
# Print full result (shows all messages + tool calls)
print("Full result:", result)
# Print JUST the model's final response
print("Model response:", result["messages"][-1].content)
If you need help with your Artificial Intelligence solutions, we're here to help.