
Announcing chatlas: A package for interacting with Large Language Models in Python

Written by Posit Team
2025-02-25
(Logo: the chatlas hex sticker, showing Atlas holding two chat bubbles.)


We are thrilled to announce the release of chatlas, a Python package designed to simplify interacting with large language models (LLMs) in Python.

Install chatlas from PyPI:

pip install -U chatlas

LLMs offer an opportunity to transform the way we work for the better. To help understand where, why, and how, it helps immensely to build software that utilizes them. Unfortunately, this can be challenging – LLM APIs tend to be designed for AI experts, putting functionality over simplicity. chatlas aims to make LLMs more accessible (especially to data scientists) by making common tasks easy (e.g., streaming output, tool calling, data extraction, etc.). As an added bonus, chatlas has an identical interface to the R package ellmer, making it easy to translate your LLM projects between R and Python.

If you’re new to programming with LLMs, we recommend the “Get Started” article for some useful background information and motivating examples.

Choosing your LLM provider

chatlas supports a wide range of model providers for both individual and enterprise needs. For personal projects, you can connect to providers like OpenAI (using ChatOpenAI()), Anthropic (using ChatAnthropic()), or Google Gemini (using ChatGoogle()). We provide recommendations to help choose the ideal model for your specific projects.

For organizations with specific security or compliance requirements, chatlas integrates with major cloud providers such as Azure (ChatAzureOpenAI()), AWS Bedrock (ChatBedrockAnthropic()), Vertex (ChatVertex()), or Snowflake (ChatSnowflake()). If your organization requires support for a provider that is not currently included, please let us know!

Connecting to a provider

Each provider has a unique set of prerequisites, often including an API key and additional Python dependencies. You can find details about these prerequisites in the function reference for the relevant provider. For example, to connect with ChatAnthropic() you’ll want to:

  • Install provider-specific dependencies: pip install "chatlas[anthropic]"
  • Sign up for a developer account
  • Provide your API key by setting an ANTHROPIC_API_KEY environment variable.
    • We recommend storing your API key(s) securely in a .env file and loading them via dotenv.
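In practice you would `pip install python-dotenv` and call `load_dotenv()` before constructing the chat client. As a rough sketch of what that pattern does under the hood, here is a minimal, standard-library-only loader (`load_env_file` is a hypothetical helper for illustration, not part of chatlas or dotenv):

```python
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for dotenv.load_dotenv(): read KEY=VALUE lines
    from a file and export any keys not already in the environment."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and malformed lines.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

# A .env file in your project root might contain a line like:
#   ANTHROPIC_API_KEY="sk-..."
```

Keeping keys in a git-ignored .env file means they never appear in your code or shell history.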

Once set up, start chatting!

from chatlas import ChatAnthropic

chat = ChatAnthropic()
chat.chat("Who is Posit?")

Models and prompts

The key to building great software with Generative AI lies in picking the right model and designing an effective prompt. In addition to helping you choose a model, chatlas also has a great article on designing effective prompts. In a nutshell, prompts give the LLM instructions and/or additional context on how to respond to the user’s input.

from chatlas import ChatAnthropic

chat = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    system_prompt="You are a helpful coding assistant."
)

Interacting with LLMs: Chatting, Tool Calling, and More

There are various ways to chat via chatlas:

  • Interactive chat console: Use chat.console() or chat.app() for a real-time chat experience within your Python console or browser. This is a great way to explore and experiment with different prompts.
  • .chat() method: Interact with the LLM programmatically with responses streamed into a rich console.
  • .stream() method: Stream responses somewhere other than the Python console, for example, a Shiny or Streamlit chatbot.
  • Tool/function calling: Extend LLM capabilities by integrating external Python functions. For example, you can create a function to get the current temperature and then register it with the chat object using register_tool(). The LLM can then use this tool to answer questions requiring real-time information access.
  • Structured data extraction: Pass a pydantic model to the .extract_data() method to extract structured data from free-form text and images.
  • Chat export: Export entire conversations in Markdown or HTML format using the .export() method.
  • Typing support: Benefit from autocompletion in your editor with chatlas’s comprehensive typing support.
  • Troubleshooting: Diagnose problems with the echo="all" option.
  • Monitoring in production: Since chatlas builds on top of official Python SDKs from OpenAI, Anthropic, etc., monitoring solutions that integrate with their logging mechanisms can be used to monitor and debug your chatbot in production.
  • Retrieval-Augmented Generation (RAG): Improve LLM responses with richer context by incorporating relevant documents retrieved based on user queries.
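To make tool calling concrete, here is a sketch of a tool. The function below (get_current_time, a hypothetical example) is plain Python with a docstring and type hints, which the model uses to decide when and how to call it; the registration lines are shown commented out because running them requires a provider API key:

```python
from datetime import datetime, timedelta, timezone

def get_current_time(tz_offset_hours: int = 0) -> str:
    """Get the current time as an ISO 8601 string, offset from UTC
    by the given number of hours."""
    now = datetime.now(timezone.utc) + timedelta(hours=tz_offset_hours)
    return now.isoformat()

# Registering and using the tool (not run here, since it needs an API key):
#
# from chatlas import ChatAnthropic
# chat = ChatAnthropic()
# chat.register_tool(get_current_time)
# chat.chat("What time is it in UTC+2?")
```

When the LLM decides it needs the current time, chatlas invokes the function on its behalf and feeds the result back into the conversation.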

What’s next for chatlas?

We’re actively developing chatlas and have exciting plans for the future.

  • Similar to ellmer, we’d like to make authentication as seamless as possible, especially when running in Posit Workbench and Connect.
  • Support for registering tools made available by a Model Context Protocol (MCP) server.
  • Support for more LLM providers.

We encourage you to explore chatlas and share your feedback!