Access TabPFN directly from Azure AI Foundry with Azure-native endpoints and authentication. Usage is billed through your Azure subscription and you are charged by Azure only for the compute resources needed to host TabPFN models.

Prerequisites

  • An active Azure subscription with access to Azure AI Foundry
  • Azure quota for VM SKUs with GPU
  • TabPFN deployed as a MaaP (Model-as-a-Platform) endpoint in your Foundry project
For a full list of supported VM SKUs, see the TabPFN Microsoft Foundry Model Card.

Getting Started

  1. Navigate to the Azure AI Foundry Model Catalog
  2. Search for TabPFN and select TabPFN-2.5
  3. Click Use this model and follow the guided setup
  4. Once deployed, note your endpoint URL and API key from the deployment details page
Microsoft Foundry hosts each TabPFN version as a separate model. When a new TabPFN version is released, it appears as a distinct model in the catalog and must be deployed independently; existing deployments are not updated automatically.

Installation

A TabPFN endpoint deployed through Microsoft Foundry can be called from any HTTP client. The examples in this guide use Python with requests:
pip install requests numpy pandas

Usage Guide

TabPFN on Azure Foundry exposes a single POST /predict HTTP endpoint. You send training data, labels, and test data in one request and receive predictions in the response; no separate training step is required.

Endpoint

Authenticate using the API key from your deployment’s Keys and Endpoint page in Azure ML Studio.
POST https://<your-endpoint>.<region>.inference.ml.azure.com/predict
Content-Type: application/json
Authorization: Bearer <your-api-key>

Request

X_train
number[][] | object
required
Training features. Accepts a row-oriented 2D array [[f1, f2], [f1, f2], ...].
y_train
number[]
required
Training labels or targets. One value per training row.
X_test
number[][] | object
required
Test features to predict for. Same format as X_train, without the target.
task_config
object
Controls the model's behavior, e.g. the task type and the prediction output format.
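The three required fields must agree in shape: y_train needs one value per X_train row, and X_test rows must have the same feature count as X_train. A small client-side check can catch these issues before the request is sent; a minimal sketch (the validate_payload helper is illustrative, not part of the API):

```python
def validate_payload(payload):
    """Basic client-side checks mirroring the required fields above."""
    # All three required fields must be present.
    for key in ("X_train", "y_train", "X_test"):
        if key not in payload:
            raise ValueError(f"missing required field: {key}")
    # One label/target per training row.
    if len(payload["y_train"]) != len(payload["X_train"]):
        raise ValueError("y_train needs one value per X_train row")
    # Test rows must match the training feature count.
    n_features = len(payload["X_train"][0])
    if any(len(row) != n_features for row in payload["X_test"]):
        raise ValueError("X_test rows must have the same feature count as X_train")
    return payload
```

Requests failing these checks would otherwise come back as 400 or 422 errors from the endpoint.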

Examples

Get a probability distribution over classes for each test row.
import requests

response = requests.post(
    "https://<your-endpoint>.<region>.inference.ml.azure.com/predict",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your-api-key>",
    },
    json={
        "task_config": {
            "task": "classification",
            "predict_params": {"output_type": "probas"},
        },
        "X_train": [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
        "y_train": [0, 1, 0],
        "X_test": [[2.0, 3.0], [4.0, 5.0]],
    },
)

result = response.json()
print(result["prediction"])
# [[0.12, 0.88], [0.55, 0.45]]
prediction is a 2D array — one inner list per test row, one probability per class.

Output types

Classification

TabPFN natively outputs class probabilities, giving you calibrated uncertainty estimates from a single model with no extra configuration.
Output type      | Shape      | Description
probas (default) | number[][] | One probability list per test row
preds            | number[]   | Predicted class label per test row

Regression

TabPFN models can provide a full predictive distribution rather than just point estimates, so you can extract quantiles or summary statistics with a single inference call.
Output type    | Shape      | Description
mean (default) | number[]   | Predicted mean per test row
median         | number[]   | Predicted median per test row
mode           | number[]   | Predicted mode per test row
quantiles      | number[][] | One list per quantile
full           | object     | All outputs (mean, median, quantiles, etc.)
main           | object     | Main outputs only
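A regression request has the same shape as the classification example; only task_config changes. A sketch of a request body asking for quantiles (the toy data here is illustrative):

```python
# Minimal regression request body for the /predict endpoint.
payload = {
    "task_config": {
        "task": "regression",                          # regression instead of classification
        "predict_params": {"output_type": "quantiles"},
    },
    "X_train": [[1.0], [2.0], [3.0], [4.0]],
    "y_train": [1.5, 2.5, 3.5, 4.5],
    "X_test": [[2.5]],
}
# POST this body with the same headers as the classification example;
# "prediction" in the response then holds one list per quantile.
```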

Errors

Code | Cause
400  | Missing required fields or invalid JSON
415  | Content-Type is not application/json
422  | Validation error (e.g. y_train has multiple columns, invalid output_type)
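When a call fails, the status code maps directly to the causes above. A small helper makes failures easier to log (explain_error is illustrative, not part of the API):

```python
# Documented /predict error causes, keyed by HTTP status code.
ERROR_CAUSES = {
    400: "Missing required fields or invalid JSON",
    415: "Content-Type is not application/json",
    422: "Validation error (e.g. multi-column y_train, invalid output_type)",
}

def explain_error(status_code):
    """Return the documented cause for a /predict error status."""
    return ERROR_CAUSES.get(status_code, f"unexpected status {status_code}")

# Usage with requests:
# response = requests.post(url, headers=headers, json=payload)
# if response.status_code != 200:
#     raise RuntimeError(explain_error(response.status_code))
```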