
# remove double escaped backslash newline #1377


Merged · 1 commit · Mar 19, 2024

@@ -58,7 +58,7 @@ SELECT pgml.transform(

## Quantization

-_Discrete quantization is not a new idea. It's been used by both algorithms and artists for more than a hundred years._\\
+_Discrete quantization is not a new idea. It's been used by both algorithms and artists for more than a hundred years._

Going beyond 16-bit down to 8 or 4 bits is possible, but not with hardware-accelerated floating point operations. If we want hardware acceleration for smaller types, we'll need to use small integers with vectorized instruction sets. This is the process of _quantization_. Quantization can be applied to existing models trained with 32-bit floats by converting the weights to smaller integer primitives that still benefit from hardware-accelerated instruction sets like Intel's [AVX](https://en.wikipedia.org/wiki/Advanced\_Vector\_Extensions). A simple way to quantize a model is to find the maximum and minimum values of the weights, then divide the range of values into the number of buckets available in your integer type: 256 for 8-bit, 16 for 4-bit. This is called _post-training quantization_, and it's the simplest way to quantize a model.
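For illustration, here is a minimal sketch of that min/max bucketing in NumPy (the helper names are hypothetical; production schemes such as GPTQ add per-channel scales, zero-points, and bit-packing for 4-bit types):

```python
import numpy as np

def quantize_minmax(weights: np.ndarray, bits: int = 8):
    """Min/max post-training quantization: map float32 weights onto
    2**bits evenly spaced buckets spanning [min, max]."""
    levels = 2 ** bits  # 256 buckets for 8-bit, 16 for 4-bit
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (levels - 1) or 1.0  # guard against constant weights
    q = np.round((weights - w_min) / scale).astype(np.uint8)  # 4-bit stays unpacked here
    return q, scale, w_min

def dequantize(q: np.ndarray, scale: float, w_min: float) -> np.ndarray:
    """Recover approximate float32 weights from the integer buckets."""
    return q.astype(np.float32) * scale + w_min

weights = np.random.randn(512).astype(np.float32)
q, scale, w_min = quantize_minmax(weights, bits=8)
error = np.abs(dequantize(q, scale, w_min) - weights).max()
print(f"max round-trip error: {error:.5f}")  # bounded by scale / 2
```

Round-trip error is bounded by half a bucket width, which is why more bits (more buckets) preserve more precision.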

@@ -20,7 +20,7 @@ This is not only a performance benefit, but also a usability improvement for cli

## Benchmark

-\\
+

<figure><img src="../../.gitbook/assets/pgcat_prepared_throughput.svg" alt=""><figcaption></figcaption></figure>

pgml-cms/docs/resources/benchmarks/mindsdb-vs-postgresml.md · 6 changes: 3 additions & 3 deletions
@@ -44,7 +44,7 @@ Another difference is that PostgresML also supports embedding models, and closel

The architectural implementations of these projects are significantly different. PostgresML takes a data-centric approach, with Postgres as the provider of both storage _and_ compute. To provide horizontal scalability for inference, the PostgresML team has also created [PgCat](https://github.com/postgresml/pgcat) to distribute workloads across many Postgres databases. MindsDB, on the other hand, takes a service-oriented approach that connects to various databases over the network (a minimal connection sketch follows the figure below).

-\\
+

<figure><img src="../../.gitbook/assets/mindsdb-pgml-architecture.png" alt=""><figcaption></figcaption></figure>
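To make the data-centric point concrete, here is a hedged sketch (host, port, and credentials are placeholders; the `pgml.transform` call with `task`/`inputs` arguments follows the PostgresML docs). Because inference is an ordinary SQL call, PgCat can route it like any other Postgres traffic:

```python
import psycopg2  # any Postgres client works; PgCat speaks the Postgres wire protocol

# Placeholder DSN: in a real deployment this points at a PgCat instance,
# which pools connections and distributes queries across PostgresML replicas.
conn = psycopg2.connect("postgresql://user:password@pgcat.example:6432/pgml")

with conn.cursor() as cur:
    # Inference is just SQL, so storage and compute share one interface.
    cur.execute(
        "SELECT pgml.transform(task => 'text-classification', inputs => ARRAY[%s])",
        ("PostgresML is fast!",),
    )
    print(cur.fetchone()[0])
```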

@@ -59,7 +59,7 @@ The architectural implementations of these projects are significantly different.
| On Premise | ✅ | ✅ |
| Web UI | ✅ | ✅ |

-\\
+

The difference in architecture leads to different tradeoffs and challenges. There are already hundreds of ways to get data into and out of a Postgres database, from just about every service, language, and platform, which makes PostgresML highly compatible with other application workflows. The MindsDB Python service, on the other hand, accepts connections from specifically supported clients like `psql` and provides a pseudo-SQL interface to its functionality. The service parses incoming MindsDB commands that look similar to SQL (but are not) for tasks like configuring database connections or doing actual machine learning. These commands typically contain what looks like a sub-select, which will actually fetch data over the wire from the configured databases for machine learning training and inference.

@@ -287,7 +287,7 @@ PostgresML is the clear winner in terms of performance. It seems to me that it c
| translation\_en\_to\_es | t5-base | 1573 | 1148 | 294 |
| summarization | sshleifer/distilbart-cnn-12-6 | 4289 | 3450 | 479 |

-\\
+

There is a general trend: the larger and slower the model, the more time is spent inside libtorch and the less the performance of the rest of the stack matters, but for interactive models and use cases there is a significant difference. We've tried to cover the most generous use case we could between these two. If we were to compare XGBoost or other classical algorithms, which can have sub-millisecond prediction times in PostgresML, the 20ms of Python service overhead MindsDB spends just parsing the incoming query would make it hundreds of times slower.

@@ -198,7 +198,7 @@ For comparison, it would cost about $299 to use OpenAI's cheapest embedding mode
| GPU | 17ms | $72 | 6 hours |
| OpenAI | 300ms | $299 | millennia |

-\\
+

You can also find embedding models that outperform OpenAI's `text-embedding-ada-002` model across many different tests on the [leaderboard](https://huggingface.co/spaces/mteb/leaderboard). It's always best to do your own benchmarking with your data, models, and hardware to find the best fit for your use case.
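As a hedged sketch of such a benchmark (the DSN and model name are placeholder choices; `pgml.embed(model, text)` is the documented PostgresML call):

```python
import time

import psycopg2

# Placeholder DSN: point it at your own PostgresML database.
conn = psycopg2.connect("postgresql://user:password@localhost:5432/pgml")
texts = ["PostgresML generates embeddings right next to the data."] * 100

start = time.monotonic()
with conn.cursor() as cur:
    # pgml.embed returns one vector per input row.
    cur.execute(
        "SELECT pgml.embed('intfloat/e5-small', doc) "
        "FROM unnest(%s::text[]) AS t(doc)",
        (texts,),
    )
    cur.fetchall()
elapsed = time.monotonic() - start

print(f"{1000 * elapsed / len(texts):.2f} ms per embedding")
```

Swapping the model name and re-running gives a like-for-like latency comparison on your own data and hardware.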
