
Initial_audit_changes #1436


Open: wants to merge 5 commits into base: master
8 changes: 5 additions & 3 deletions pgml-cms/docs/README.md
@@ -4,12 +4,14 @@ description: The key concepts that make up PostgresML.

# Overview

PostgresML is a complete MLOps platform built on PostgreSQL. Our operating principle is:
PostgresML is a complete [MLOps platform](## "A Machine Learning Operations platform is a set of practices that streamlines bringing machine learning models to production") built on PostgreSQL. Our operating principle is:

> _Move models to the database, rather than constantly moving data to the models._

Data for ML & AI systems is inherently larger and more dynamic than the models. It's more efficient, manageable and reliable to move models to the database, rather than continuously moving data to the models.

We offer both [managed cloud](/docs/product/cloud-database/) and [local](/docs/resources/developer-docs/installation) installations to provide solutions for wherever you keep your data.

## AI engine

PostgresML allows you to take advantage of the fundamental relationship between data and models, by extending the database with the following capabilities:
@@ -48,8 +50,8 @@ Some of the use cases include:

## Our mission

PostgresML strives to provide access to open source AI for everyone. We are continuously developping PostgresML to keep up with the rapidly evolving use cases for ML & AI, but we remain committed to never breaking user facing APIs. We welcome contributions to our [open source code and documentation](https://github.com/postgresml) from the community.
PostgresML strives to provide access to open source AI for everyone. We are continuously developing PostgresML to keep up with the rapidly evolving use cases for ML & AI, but we remain committed to never breaking user-facing APIs. We welcome contributions to our [open source code and documentation](https://github.com/postgresml) from the community.

## Managed cloud

While our extension and pooler are open source, we also offer a managed cloud database service for production deployments of PostgresML. You can [sign up](https://postgresml.org/signup) for an account and get a free Serverless database in seconds.
While our extension and pooler are open source, we also offer a managed cloud database service for production deployments of PostgresML. You can [sign up](https://postgresml.org/signup) for an account and get a free Serverless database in seconds.
12 changes: 6 additions & 6 deletions pgml-cms/docs/api/apis.md
@@ -4,15 +4,15 @@ description: Overview of the PostgresML SQL API and SDK.

# API overview

PostgresML is a PostgreSQL extension which adds SQL functions to the database where it's installed. The functions work with modern machine learning algorithms and latest open source LLMs while maintaining a stable API signature. They can be used by any application that connects to the database.
PostgresML is a PostgreSQL extension which adds SQL functions to the database where it is installed. The functions work with modern machine learning algorithms and latest open source LLMs while maintaining a stable API signature. They can be used by any application that connects to the database.

In addition to the SQL API, we built and maintain a client SDK for JavaScript, Python and Rust. The SDK uses the same extension functionality to implement common ML & AI use cases, like retrieval-augmented generation (RAG), chatbots, and semantic & hybrid search engines.
In addition to the SQL API, we built and maintain a client SDK for JavaScript, Python, and Rust. The SDK uses the same extension functionality to implement common ML & AI use cases, like retrieval-augmented generation (RAG), chatbots, and semantic & hybrid search engines.

Using the SDK is optional, and you can implement the same functionality with standard SQL queries. If you feel more comfortable using a programming language, the SDK can help you to get started quickly.

## [SQL extension](sql-extension/)

The PostgreSQL extension provides all of the ML & AI functionality, like training models and inference, via SQL functions. The functions are designed for ML practitioners to use dozens of ML algorithms to train models, and run real time inference, on live application data. Additionally, the extension provides access to the latest Hugging Face transformers for a wide range of NLP tasks.
The PostgreSQL extension provides all of the ML & AI functionality, like training models and inference, via SQL functions. The functions are designed for ML practitioners to use dozens of ML algorithms to train models and run real time inference on live application data. Additionally, the extension provides access to the latest Hugging Face transformers for a wide range of NLP tasks.

### Functions

@@ -21,18 +21,18 @@ The following functions are implemented and maintained by the PostgresML extensi
| Function | Description |
|------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [pgml.embed()](sql-extension/pgml.embed) | Generate embeddings inside the database using open source embedding models from Hugging Face. |
| [pgml.transform()](sql-extension/pgml.transform/) | Download and run latest Hugging Face transformer models, like Llama, Mixtral, and many more to perform various NLP tasks like text generation, summarization, sentiment analysis and more. |
| [pgml.transform()](sql-extension/pgml.transform/) | Download and run the latest Hugging Face transformer models, like Llama, Mixtral, and many more to perform various NLP tasks like text generation, summarization, sentiment analysis, and more. |
| pgml.transform_stream() | Streaming version of [pgml.transform()](sql-extension/pgml.transform/). Retrieve tokens as they are generated by the LLM, decreasing time to first token. |
| [pgml.train()](sql-extension/pgml.train/) | Train a machine learning model on data from a Postgres table or view. Supports XGBoost, LightGBM, Catboost and all Scikit-learn algorithms. |
| [pgml.deploy()](sql-extension/pgml.deploy) | Deploy a version of the model created with pgml.train(). |
| [pgml.predict()](sql-extension/pgml.predict/) | Perform real time inference using a model trained with pgml.train() on live application data. |
| [pgml.tune()](sql-extension/pgml.tune) | Run LoRA fine tuning on an open source model from Hugging Face using data from a Postgres table or view. |

Together with standard database functionality provided by PostgreSQL, these functions allow to create and manage the entire life cycle of a machine learning application.
Together with standard database functionality provided by PostgreSQL, these functions allow you to create and manage the entire life cycle of a machine-learning application.
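As a sketch of how these functions compose in practice (the table, columns, and project name below are illustrative, not taken from the docs):

```sql
-- Train a classifier on a hypothetical table of labeled customers.
SELECT * FROM pgml.train(
    project_name => 'Churn Prediction',
    task => 'classification',
    relation_name => 'public.customers',
    y_column_name => 'churned',
    algorithm => 'xgboost'
);

-- Run real time inference on live rows with the deployed model.
SELECT id,
       pgml.predict('Churn Prediction', ARRAY[age, tenure, monthly_spend]) AS churn_score
FROM public.customers;
```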

## [Client SDK](client-sdk/)

The client SDK implements best practices and common use cases, using the PostgresML SQL functions and standard PostgreSQL features to do it. The SDK core is written in Rust, which manages creating and running queries, connection pooling, and error handling.
The client SDK implements best practices and common use cases using the PostgresML SQL functions and standard PostgreSQL features. The SDK core is written in Rust, which manages creating and running queries, connection pooling, and error handling.

For each additional language we support (currently JavaScript and Python), we create and publish language-native bindings. This architecture ensures all programming languages we support have identical APIs and similar performance when interacting with PostgresML.

6 changes: 3 additions & 3 deletions pgml-cms/docs/api/sql-extension/pgml.deploy.md
@@ -87,7 +87,7 @@ SELECT * FROM pgml.deploy(

### Rolling Back

In case the new model isn't performing well in production, it's easy to rollback to the previous version. A rollback creates a new deployment for the old model. Multiple rollbacks in a row will oscillate between the two most recently deployed models, making rollbacks a safe and reversible operation.
If the new model is not performing well in production, it is easy to roll back to the previous version. A rollback creates a new deployment for the old model. Multiple rollbacks in a row will oscillate between the two most recently deployed models, making rollbacks a safe and reversible operation.

#### Rollback

@@ -101,7 +101,7 @@ SELECT * FROM pgml.deploy(
#### Output

```sql
project | strategy | algorithm
project | strategy | algorithm
------------------------------------+----------+-----------
Handwritten Digit Image Classifier | rollback | linear
(1 row)
@@ -129,7 +129,7 @@ SELECT * FROM pgml.deploy(

### Specific Model IDs

In the case you need to deploy an exact model that is not the `most_recent` or `best_score`, you may deploy a model by id. Model id's can be found in the `pgml.models` table.
If you need to deploy a specific model that is not the `most_recent` or `best_score`, you may deploy a model by id. Model ids can be found in the `pgml.models` table.

#### SQL

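A minimal sketch of deploying by id, assuming `pgml.deploy()` accepts a model id as described above (the id `42` is a placeholder; query `pgml.models` for real ids):

```sql
-- Look up available model ids for your projects.
SELECT id, project_id, algorithm FROM pgml.models;

-- Deploy a specific model by id (42 is a placeholder).
SELECT * FROM pgml.deploy(42);
```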
4 changes: 2 additions & 2 deletions pgml-cms/docs/api/sql-extension/pgml.embed.md
@@ -6,7 +6,7 @@ description: >-

# pgml.embed()

The `pgml.embed()` function generates [embeddings](/docs/use-cases/embeddings/) from text, using in-database models downloaded from Hugging Face. Thousands of [open-source models](https://huggingface.co/models?library=sentence-transformers) are available and new and better ones are being published regularly.
The `pgml.embed()` function generates [embeddings](/docs/use-cases/embeddings/) from text, using in-database models downloaded from Hugging Face. Thousands of [open-source models](https://huggingface.co/models?library=sentence-transformers) are available, with new and better models being published regularly.
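A minimal call might look like the following sketch (the model choice is illustrative; any sentence-transformers model from the catalog linked above should work):

```sql
SELECT pgml.embed(
    'intfloat/e5-small-v2',                     -- embedding model from Hugging Face
    'PostgresML moves models to the database'   -- text to embed
) AS embedding;
```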

## API

@@ -69,7 +69,7 @@ VALUES
{% endtab %}
{% endtabs %}

In this example, we're using [generated columns](https://www.postgresql.org/docs/current/ddl-generated-columns.html) to automatically create an embedding of the `quote` column every time the column value is updated.
In this example, we are using [generated columns](https://www.postgresql.org/docs/current/ddl-generated-columns.html) to automatically create an embedding of the `quote` column every time the column value is updated.
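A sketch of the pattern described, assuming a `quotes` table and the `pgvector` extension (the table name, model, and 384-dimension output are assumptions, since the tabbed example above is collapsed):

```sql
CREATE TABLE quotes (
    id BIGSERIAL PRIMARY KEY,
    quote TEXT NOT NULL,
    -- Recomputed automatically every time `quote` is inserted or updated.
    embedding vector(384) GENERATED ALWAYS AS (
        pgml.embed('intfloat/e5-small-v2', quote)::vector(384)
    ) STORED
);
```

Note that Postgres generated columns require the generating expression to be immutable; this sketch assumes `pgml.embed()` satisfies that requirement.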

#### Using embeddings in queries

4 changes: 2 additions & 2 deletions pgml-cms/docs/api/sql-extension/pgml.train/README.md
@@ -37,7 +37,7 @@ pgml.train(
| `task` | `'regression'` | The objective of the experiment: `regression`, `classification` or `cluster` |
| `relation_name` | `'public.search_logs'` | The Postgres table or view where the training data is stored or defined. |
| `y_column_name` | `'clicked'` | The name of the label (aka "target" or "unknown") column in the training table. |
| `algorithm` | `'xgboost'` | <p>The algorithm to train on the dataset, see the task specific pages for available algorithms:<br><a data-mention href="regression.md">regression.md</a></p><p><a data-mention href="classification.md">classification.md</a><br><a data-mention href="clustering.md">clustering.md</a></p> |
| `algorithm` | `'xgboost'` | <p>The algorithm to train on the dataset, see the task specific pages for available algorithms:<br>[regression.md](regression.md "mention")</p><p>[classification.md](classification.md "mention")<br>[clustering.md](clustering.md "mention")</p> |
| `hyperparams` | `{ "n_estimators": 25 }` | The hyperparameters to pass to the algorithm for training, JSON formatted. |
| `search` | `grid` | If set, PostgresML will perform a hyperparameter search to find the best hyperparameters for the algorithm. See [hyperparameter-search.md](hyperparameter-search.md "mention") for details. |
| `search_params` | `{ "n_estimators": [5, 10, 25, 100] }` | Search parameters used in the hyperparameter search, using the scikit-learn notation, JSON formatted. |
@@ -63,7 +63,7 @@ This will create a "My Classification Project", copy the `pgml.digits` table int

When used for the first time in a project, `pgml.train()` function requires the `task` parameter, which can be either `regression` or `classification`. The task determines the relevant metrics and analysis performed on the data. All models trained within the project will refer to those metrics and analysis for benchmarking and deployment.

The first time it's called, the function will also require a `relation_name` and `y_column_name`. The two arguments will be used to create the first snapshot of training and test data. By default, 25% of the data (specified by the `test_size` parameter) will be randomly sampled to measure the performance of the model after the `algorithm` has been trained on the 75% of the data.
The first time it is called, the function will also require a `relation_name` and `y_column_name`. The two arguments will be used to create the first snapshot of training and test data. By default, 25% of the data (specified by the `test_size` parameter) will be randomly sampled to measure the performance of the model after the `algorithm` has been trained on the remaining 75% of the data.
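A first call with these required arguments might look like the following sketch (reusing the `pgml.digits` table mentioned above; `test_size` is shown explicitly for clarity):

```sql
SELECT * FROM pgml.train(
    project_name => 'My Classification Project',
    task => 'classification',
    relation_name => 'pgml.digits',
    y_column_name => 'target',
    test_size => 0.25    -- hold out 25% of rows for evaluation (the default)
);
```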
> **Contributor:** Why do we prefer not using contractions?
>
> **Author:** I suggested it as a way to make it simpler for non-native English speakers to read. Another reason is for translation, but I figured you probably have no plans for that at this point.

!!! tip

@@ -8,7 +8,7 @@ Text classification is a task which includes sentiment analysis, natural languag

### Sentiment analysis

Sentiment analysis is a type of natural language processing technique which analyzes a piece of text to determine the sentiment or emotion expressed within. It can be used to classify a text as positive, negative, or neutral.
Sentiment analysis is a type of natural language processing (NLP) technique which analyzes a piece of text to determine the sentiment or emotion expressed within. It can be used to classify a text as positive, negative, or neutral.

#### Example

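The collapsed example likely resembles this sketch (the inputs are illustrative):

```sql
SELECT pgml.transform(
    task => 'text-classification',
    inputs => ARRAY[
        'I love how amazingly simple ML has become!',
        'I hate doing mundane and thankless tasks.'
    ]
) AS sentiment;
```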
@@ -27,7 +27,7 @@ _Result_

### Model from hub

To use a specific model from :hugging: model hub, pass the model name along with task name in task.
To use a specific model from the Hugging Face model hub, pass the model name along with the task name in the `task` argument.

```sql
SELECT pgml.transform(
@@ -109,7 +109,7 @@ _Result_

### Beam Search

Text generation typically utilizes a greedy search algorithm that selects the word with the highest probability as the next word in the sequence. However, an alternative method called beam search can be used, which aims to minimize the possibility of overlooking hidden high probability word combinations. Beam search achieves this by retaining the num\_beams most likely hypotheses at each step and ultimately selecting the hypothesis with the highest overall probability. We set `num_beams > 1` and `early_stopping=True` so that generation is finished when all beam hypotheses reached the EOS token.
Text generation typically utilizes a greedy search algorithm that selects the word with the highest probability as the next word in the sequence. However, an alternative method called beam search can be used, which aims to minimize the possibility of overlooking hidden high probability word combinations. Beam search achieves this by retaining the `num_beams` most likely hypotheses at each step and ultimately selecting the hypothesis with the highest overall probability. We set `num_beams > 1` and `early_stopping=True` so that generation finishes when all beam hypotheses have reached the EOS token.

```sql
SELECT pgml.transform(
@@ -135,14 +135,16 @@ _Result_
]]
```

Sampling methods involve selecting the next word or sequence of words at random from the set of possible candidates, weighted by their probabilities according to the language model. This can result in more diverse and creative text, as well as avoiding repetitive patterns. In its most basic form, sampling means randomly picking the next word $w\_t$ according to its conditional probability distribution: $$w_t \approx P(w_t|w_{1:t-1})$$
Sampling methods involve selecting the next word or sequence of words at random from the set of possible candidates, weighted by their probabilities according to the language model. This can result in more diverse and creative text, as well as avoiding repetitive patterns. In its most basic form, sampling means randomly picking the next word $w_t$ according to its conditional probability distribution: $$w_t \sim P(w_t|w_{1:t-1})$$

However, the randomness of the sampling method can also result in less coherent or inconsistent text, depending on the quality of the model and the chosen sampling parameters such as temperature, top-k, or top-p. Therefore, choosing an appropriate sampling method and parameters is crucial for achieving the desired balance between creativity and coherence in generated text.
However, the randomness of the sampling method can also result in less coherent or inconsistent text, depending on the quality of the model and the chosen sampling parameters such as `temperature`, `top-k`, or `top-p`. Therefore, choosing an appropriate sampling method and parameters is crucial for achieving the desired balance between creativity and coherence in generated text.

You can pass `do_sample = True` in the arguments to use sampling methods. It is recommended to alter `temperature` or `top_p` but not both.
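Following that guidance, a sampling call might look like this sketch (the model, prompt, and argument values are illustrative):

```sql
SELECT pgml.transform(
    task => '{
        "task": "text-generation",
        "model": "gpt2"
    }'::JSONB,
    inputs => ARRAY['Once upon a time,'],
    args => '{
        "do_sample": true,
        "temperature": 0.9,
        "max_new_tokens": 30
    }'::JSONB
);
```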

### _Temperature_

The `temperature` parameter can fine-tune the level of confidence, diversity, and randomness of a model. It uses a range from 0 (very conservative output) to infinity (very diverse output) to define how the model should select a certain output based on the output's certainty. Higher temperatures should be used when the certainty of the output is low, and lower temperatures should be used when the certainty is very high. A `temperature` of 1 is considered a medium setting.

```sql
SELECT pgml.transform(
task => '{
@@ -167,6 +169,8 @@ _Result_

### _Top p_

Top-p (nucleus) sampling is a technique used to improve the quality of generative model output. At each step, it samples only from the smallest set of tokens whose cumulative probability exceeds `top_p`, allowing for more diverse responses while cutting off the unlikely tail of the distribution. If you are experiencing repetitive responses, modifying this setting can improve the quality of the output. The value of `top_p` is a number between 0 and 1, so a setting of `0.8` restricts sampling to the most likely tokens whose probabilities sum to 80 percent.

```sql
SELECT pgml.transform(
task => '{
2 changes: 1 addition & 1 deletion pgml-cms/docs/api/sql-extension/pgml.tune.md
@@ -45,7 +45,7 @@ translation

\===

This HuggingFace dataset stores the data as language key pairs in a JSON document. To use it with PostgresML, we'll need to provide a `VIEW` that structures the data into more primitively typed columns.
This Hugging Face dataset stores the data as language key pairs in a JSON document. To use it with PostgresML, we'll need to provide a `VIEW` that structures the data into more primitively typed columns.

\=== "SQL"
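The collapsed tab presumably defines such a view; a minimal sketch, assuming the dataset lives in a table with a JSONB `translation` column keyed by language code (the table name and language keys are assumptions):

```sql
CREATE OR REPLACE VIEW translation_pairs AS
SELECT
    translation->>'en' AS en,   -- source language text
    translation->>'es' AS es    -- target language text
FROM pgml.translation_dataset;
```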
