
Huggingface Endpoints

The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, where people can easily collaborate and build ML together.

The Hugging Face Hub also offers various endpoints to build ML applications. This example showcases how to connect to the different endpoint types.

In particular, text generation inference is powered by Text Generation Inference: a custom-built Rust, Python and gRPC server for blazing-fast text generation inference.

from langchain_huggingface import HuggingFaceEndpoint

Installation and Setup

To use, you should have the huggingface_hub Python package installed.

%pip install --upgrade --quiet huggingface_hub
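The examples below also use the langchain-huggingface partner package, which provides the HuggingFaceEndpoint class. If it is not already installed in your environment, install it the same way:

%pip install --upgrade --quiet langchain-huggingface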
# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token

from getpass import getpass

HUGGINGFACEHUB_API_TOKEN = getpass()
import os

os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
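Alternatively, here is a minimal sketch of authenticating through the huggingface_hub login helper, which stores the token for subsequent calls (this step is optional if the environment variable above is set):

from huggingface_hub import login

login(token=HUGGINGFACEHUB_API_TOKEN)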

Prepare Examples

from langchain_huggingface import HuggingFaceEndpoint
from langchain_core.prompts import PromptTemplate
question = "Who won the FIFA World Cup in the year 1994? "

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)

Examples

Here is an example of how you can access the HuggingFaceEndpoint integration of the free Serverless Inference API.

repo_id = "mistralai/Mistral-7B-Instruct-v0.2"

llm = HuggingFaceEndpoint(
    repo_id=repo_id,
    max_new_tokens=128,
    temperature=0.5,
    huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN,
)
llm_chain = prompt | llm
print(llm_chain.invoke({"question": question}))

Dedicated Endpoint

The free serverless API lets you implement solutions and iterate in no time, but it may be rate limited for heavy use cases, since the load is shared with other requests.

For enterprise workloads, the best option is to use Inference Endpoints - Dedicated. This gives access to a fully managed infrastructure that offers more flexibility and speed. These resources come with continuous support and uptime guarantees, as well as options like AutoScaling.

# Set the URL to your Inference Endpoint below
your_endpoint_url = "https://fayjubiy2xqn36z0.us-east-1.aws.endpoints.huggingface.cloud"
llm = HuggingFaceEndpoint(
    endpoint_url=your_endpoint_url,
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
)
llm.invoke("What did foo say about bar?")

Streaming

from langchain_core.callbacks import StreamingStdOutCallbackHandler
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url=your_endpoint_url,
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
    streaming=True,
)
llm.invoke(
    "What did foo say about bar?",
    config={"callbacks": [StreamingStdOutCallbackHandler()]},
)

The same HuggingFaceEndpoint class can be used with a local Hugging Face TGI instance serving the LLM. Check out the TGI repository for details on support for various hardware (GPU, TPU, Gaudi, ...).
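For instance, here is a minimal sketch assuming a TGI container is serving a model locally on port 8080 (the URL and port are assumptions; adjust them to your deployment):

# Hypothetical local TGI server; replace the URL with your own host and port
llm = HuggingFaceEndpoint(
    endpoint_url="http://localhost:8080/",
    max_new_tokens=512,
    temperature=0.01,
)
print(llm.invoke("What did foo say about bar?"))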

