# FastAPI

### Learn

* Read [“FastAPI for Flask Users”](https://amitness.com/2020/06/fastapi-vs-flask/) (10 minutes)
* Watch [“Build a machine learning API from scratch”](https://youtu.be/1zMQBe0l1bM) by FastAPI’s creator. Live coding starts at 4:55, ends at 50:20. (25 minutes at 2x speed)

### Example: Data visualization

Labs projects can use [Plotly](https://plotly.com/python/), a popular visualization library for both Python & JavaScript.

Follow the [getting started](https://bloomtechlabs.gitbook.io/data-science/master#getting-started) instructions.

Edit `app/main.py` to add your API `title` and `description`.

```python
app = FastAPI(
    title='World Metrics DS API',
    description='Visualize world metrics from Gapminder data',
    docs_url='/'
)
```

Prototype your visualization in a notebook.

```python
import plotly.express as px

dataframe = px.data.gapminder().rename(columns={
    'year': 'Year', 
    'lifeExp': 'Life Expectancy', 
    'pop': 'Population', 
    'gdpPercap': 'GDP Per Capita'
})

country = 'United States'
metric = 'Population'
subset = dataframe[dataframe.country == country]
fig = px.line(subset, x='Year', y=metric, title=f'{metric} in {country}')
fig.show()
```

Define a function for your visualization, ending with `return fig.to_json()`.

Then edit `app/viz.py` to add your code.

```python
import plotly.express as px

dataframe = px.data.gapminder().rename(columns={
    'year': 'Year', 
    'lifeExp': 'Life Expectancy', 
    'pop': 'Population', 
    'gdpPercap': 'GDP Per Capita'
})


@router.get('/worldviz')
async def worldviz(metric, country):
    """
    Visualize world metrics from Gapminder data

    ### Query Parameters
    - `metric`: 'Life Expectancy', 'Population', or 'GDP Per Capita'
    - `country`: [country name](https://www.gapminder.org/data/geo/), case sensitive

    ### Response
    JSON string to render with react-plotly.js
    """
    subset = dataframe[dataframe.country == country]
    fig = px.line(subset, x='Year', y=metric, title=f'{metric} in {country}')
    return fig.to_json()
```

Install `plotly` and test locally.

Your web teammates will re-use the [data viz code & docs in our `labs-spa-starter` repo](https://github.com/Lambda-School-Labs/labs-spa-starter/tree/main/src/components/pages/ExampleDataViz). The web app will call the DS API to get the data, then use `react-plotly.js` to render the visualization.

Plotly Python docs

* [Example gallery](https://plotly.com/python/)
* [Setting Graph Size](https://plotly.com/python/setting-graph-size/)
* [Styling Plotly Express Figures](https://plotly.com/python/styling-plotly-express/)
* [Text and font styling](https://plotly.com/python/v3/font/)
* [Theming and templates](https://plotly.com/python/templates/)

Plotly JavaScript docs

* BloomTech [`labs-spa-starter` data viz code & docs](https://github.com/BloomTech-Labs/labs-spa-starter/tree/main/src/components/pages/ExampleDataViz)
* [Example gallery](https://plotly.com/javascript/)
* [Fundamentals](https://plotly.com/javascript/plotly-fundamentals/)
* [react-plotly.js](https://plotly.com/javascript/react/)

### Example: Machine learning

Follow the [getting started](https://bloomtechlabs.gitbook.io/data-science/master#getting-started) instructions.

Edit `app/main.py` to add your API `title` and `description`.

```python
app = FastAPI(
    title='House Price DS API',
    description='Predict house prices in California',
    docs_url='/'
)
```

Edit `app/ml.py` to add a predict function that returns a naive baseline.

```python
@router.post('/predict')
async def predict(item: Item):
    """Predict house prices in California."""
    y_pred = 200000
    return {'predicted_price': y_pred}
```

In a notebook, explore your data. Make an educated guess of what features you'll use.

```python
import pandas as pd
from sklearn.datasets import fetch_california_housing

# Load data
california = fetch_california_housing()
print(california.DESCR)
X = pd.DataFrame(california.data, columns=california.feature_names)
y = california.target

# Rename columns
X.columns = X.columns.str.lower()
X = X.rename(columns={'avebedrms': 'bedrooms', 'averooms': 'total_rooms', 'houseage': 'house_age'})

# Explore descriptive stats
X.describe()
```

```python
# Use these 3 features
features = ['bedrooms', 'total_rooms', 'house_age']
```
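
If you want the hard-coded constant in your predict stub to be defensible, one option is the mean of the target. This sketch assumes the dataset's target units (median house value in hundreds of thousands of dollars, per `california.DESCR`):

```python
from sklearn.datasets import fetch_california_housing

# Target is median house value in units of $100,000 (see california.DESCR)
california = fetch_california_housing()
baseline = california.target.mean() * 100_000
print(f'Naive baseline: ${baseline:,.0f}')
```

The result lands in the neighborhood of the `200000` used in the predict stub.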

Add a class in `app/ml.py` to use your features.

```python
import pandas as pd
from pydantic import BaseModel


class House(BaseModel):
    """Data model to parse the request body JSON."""
    bedrooms: int
    total_rooms: float
    house_age: float

    def to_df(self):
        """Convert pydantic object to pandas dataframe with 1 row."""
        return pd.DataFrame([vars(self)])


@router.post('/predict')
async def predict(house: House):
    """Predict house prices in California."""
    X_new = house.to_df()
    y_pred = 200000
    return {'predicted_price': y_pred}
```

Install `pandas` if you haven't already.

Test locally. Now your web teammates can make POST requests to your API endpoint.

In a notebook, train your pipeline and pickle it. See these docs:

* [Scikit-learn docs - Model persistence](https://scikit-learn.org/stable/modules/model_persistence.html)
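
As a concrete example of that step, here is a minimal train-and-pickle sketch using `joblib`. The linear regression is an illustrative model choice, not a recommendation:

```python
import joblib
import pandas as pd
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load and rename columns to match the House data model
california = fetch_california_housing()
X = pd.DataFrame(california.data, columns=california.feature_names)
X.columns = X.columns.str.lower()
X = X.rename(columns={'avebedrms': 'bedrooms',
                      'averooms': 'total_rooms',
                      'houseage': 'house_age'})

features = ['bedrooms', 'total_rooms', 'house_age']
y = california.target * 100_000  # convert to dollars

# Illustrative model choice; swap in your own estimator
pipeline = make_pipeline(StandardScaler(), LinearRegression())
pipeline.fit(X[features], y)

joblib.dump(pipeline, 'pipeline.joblib')
```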

Get version numbers for every package you used in your pipeline. Install the exact versions of these packages in your virtual environment.
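
One way to capture those version numbers is to print them from the notebook environment where you trained the pipeline:

```python
import joblib
import pandas
import sklearn

# Record the exact versions your pickled pipeline depends on,
# then pin these same versions in the API's virtual environment.
for package in (joblib, pandas, sklearn):
    print(package.__name__, package.__version__)
```

Unpickling with different library versions than the ones used for training can fail or silently change behavior, which is why the versions must match.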

Edit `app/ml.py` to deserialize your model and use it in your predict function.
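
Here is a sketch of what that edit looks like. The `DummyRegressor` stand-in exists only so this snippet runs on its own; in your app, `pipeline.joblib` is the file you pickled in the notebook, and the last two lines go inside your predict function:

```python
import joblib
import pandas as pd
from sklearn.dummy import DummyRegressor

# Stand-in artifact so this sketch runs on its own; in your app,
# `pipeline.joblib` is the pipeline you pickled in the notebook.
columns = ['bedrooms', 'total_rooms', 'house_age']
stand_in = DummyRegressor(strategy='constant', constant=200000).fit(
    pd.DataFrame([[0.0, 0.0, 0.0]], columns=columns), [0.0])
joblib.dump(stand_in, 'pipeline.joblib')

# In app/ml.py: load once at import time (not inside the route),
# so each request skips the deserialization cost.
pipeline = joblib.load('pipeline.joblib')

# Inside predict(), replace the hard-coded baseline with the model's output:
X_new = pd.DataFrame([{'bedrooms': 2, 'total_rooms': 5.0, 'house_age': 25.0}])
y_pred = float(pipeline.predict(X_new)[0])
print(y_pred)  # 200000.0
```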

Now you are ready to re-deploy! 🚀
