
Commit b4703d9

Montana Low committed: search draft
1 parent 233e8ea commit b4703d9

File tree

4 files changed: +118 −0 lines changed

125 KB (binary file)
Lines changed: 115 additions & 0 deletions
@@ -0,0 +1,115 @@
<h1>Postgres Full Text Search is <del>Good Enough</del> the Best!</h1>

<p class="author">
<img width="54px" height="54px" src="/images/team/montana.jpg" />
Montana Low<br/>
August 25, 2022
</p>

Normalized data is a powerful tool leveraged by 10x engineering organizations. If you haven't read [Postgres Full Text Search is Good Enough!](http://rachbelaid.com/postgres-full-text-search-is-good-enough/) you should, unless you're willing to take that claim at face value without the code samples to prove it. We'll go beyond that original claim in this post, but to reiterate the main points (with a quick sketch after the list), Postgres supports:

- Stemming
- Ranking / Boost
- Multiple languages
- Fuzzy search for misspellings
- Accent support
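
Here's a minimal sketch of what those features look like in plain SQL. The `documents(id, title, body)` table and the example terms are hypothetical, and `unaccent` and `pg_trgm` are the standard contrib extensions:

```sql
-- accent folding and trigram fuzzy matching ship as contrib extensions
CREATE EXTENSION IF NOT EXISTS unaccent;
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- stemming & multiple languages: the text search configuration handles both
SELECT to_tsvector('english', 'running runs ran') @@ to_tsquery('english', 'run'); -- true

-- ranking / boost: weight the title above the body
SELECT id,
       ts_rank(
         setweight(to_tsvector('english', title), 'A') ||
         setweight(to_tsvector('english', body),  'B'),
         to_tsquery('english', 'postgres & search')
       ) AS rank
FROM documents
ORDER BY rank DESC
LIMIT 10;

-- fuzzy search for misspellings: trigram similarity tolerates typos
SELECT id, similarity(title, 'postgers') AS score
FROM documents
WHERE title % 'postgers'
ORDER BY score DESC;

-- accent support: fold accents before matching
SELECT unaccent('Héllo Wörld'); -- 'Hello World'
```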

This is good enough for most of the use cases out there, without introducing any additional concerns to your application. But if you've ever tried to deliver relevant search results at scale, you'll realize that you need a lot more than these fundamentals. ElasticSearch has all kinds of best-in-class features, like a modified version of BM25 that is state of the art (developed in the 1970s), one of the many features you need beyond the Term Frequency (TF) based ranking that Postgres uses... but _the ElasticSearch approach is a dead end_ for two reasons:

1. Trying to improve search relevance with statistics like TF-IDF and BM25 is like trying to make a flying car. What you want is a helicopter instead.
2. Computing inverse document frequency for BM25 brutalizes your search indexing performance, which leads to a [host of follow-on issues via distributed computation](https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing), all for a benefit that was dubious to begin with.

<figure markdown>
<center markdown>
![Flying Car](/blog/images/delorean.jpg)
</center>
<figcaption>What we were promised</figcaption>
</figure>
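
To make the second point concrete, here's a sketch of the standard BM25 scoring function (ElasticSearch ships a tuned variant, but the shape is the same):

$$
\operatorname{score}(D, Q) = \sum_{i=1}^{n} \operatorname{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \, \frac{|D|}{\operatorname{avgdl}}\right)},
\qquad
\operatorname{IDF}(q_i) = \ln\!\left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5} + 1\right)
$$

The term frequency f(q_i, D) is local to the document, but N (total documents), n(q_i) (documents containing the term) and avgdl (average document length) are corpus-wide statistics that shift whenever any document changes. That's the indexing bill you pay for a modest relevance bump.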

Academics have spent decades inventing algorithms that use orders of magnitude more compute to eke out marginally better results that often aren't worth it in practice. This isn't meant to disparage academia generally; their work has consistently improved our world, but we need to pay attention to tradeoffs.

If you actually want to meaningfully improve search results, you generally need to add new data sources. Relevance is much more often revealed by the way other things **_relate_** to the document than by the content of the document itself. Google proved the point 23 years ago: PageRank doesn't rely on the page content itself as much as it uses metadata from _links to the pages_. We live in a connected world, and it's the interplay among things that reveals their relevance, whether that's links for websites, sales for products, or shares for social posts... It's the greater context around the document that matters.

> _If you want to improve your search results, don't rely on expensive O(n*m) word frequency statistics. Get new sources of data instead. It's the relational nature of relevance that underpins why a relational database forms the ideal search engine._

Postgres made the right call to avoid the costs required to compute Inverse Document Frequency in its search indexing, given the meager benefit. Instead, it offers the most feature-complete relational data platform. [Elasticsearch will tell you](https://www.elastic.co/guide/en/elasticsearch/reference/current/joining-queries.html) you can't join data in a **_naively_** distributed system at read time, because it is prohibitively expensive. Instead you'll have to join the data eagerly at indexing time, which is even more prohibitively expensive. That's good for their business, since you're the one paying for it, and it will scale until you're bankrupt.

What you should really do is leave the data normalized inside Postgres, which will allow you to join additional, related data at query time. It will take multiple orders of magnitude less compute to index and search a normalized corpus, meaning you'll have a lot longer (potentially forever) before you need to distribute your workload, and then maybe you can do that intelligently instead of naively. Instead of spending your time building and maintaining pipelines to shuffle updates between systems, you can work on new sources of data to really improve relevance.

With PostgresML, you can now skip straight to full-on machine learning when you have the related data. You can load your feature store into the same database as your search corpus. Each feature can live in its own independent table, with its own update cadence, rather than having to reindex and denormalize entire documents back to ElasticSearch, or worse, large portions of the entire corpus, when a single thing changes.
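
To make "normalized" concrete, the example query further down assumes a layout roughly like the sketch below. The table and column names are illustrative assumptions, not a schema PostgresML requires:

```sql
-- an illustrative, hypothetical layout matching the query below
CREATE TABLE documents (
    id        BIGSERIAL PRIMARY KEY,
    full_text TSVECTOR NOT NULL              -- the searchable corpus
);
CREATE INDEX ON documents USING GIN (full_text);

CREATE TABLE document_embeddings (
    document_id BIGINT PRIMARY KEY REFERENCES documents (id),
    vector      REAL[] NOT NULL              -- refreshed on its own cadence
);

CREATE TABLE user_embeddings (
    user_id BIGINT PRIMARY KEY,
    vector  REAL[] NOT NULL
);

CREATE TABLE session_level_features (
    user_id BIGINT PRIMARY KEY
    -- the ranking model's feature columns live here, one per column
);
```

Each table updates on its own schedule, and the GIN index keeps keyword matching fast without ever touching the embedding or feature tables.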

With a single SQL query, you can do multiple passes of re-ranking, pruning and personalization to refine a search relevance score:

- basic term relevance
- embedding similarities
- XGBoost or LightGBM inference

These queries can execute in milliseconds on large, production-sized corpora with Postgres's multiple indexing strategies. You can do all of this without adding any new infrastructure to your stack.

The following full-blown example is for demonstration purposes only. You may want to try the PostgresML Gym to work up to a full understanding.

<center markdown>
[Try the PostgresML Gym](https://gym.postgresml.org/){ .md-button .md-button--primary }
</center>

```sql title="search.sql" linenums="1"
WITH query AS (
  -- construct a query context with data that would typically be
  -- passed in from the application layer
  SELECT
    tsquery('my | search | terms') AS keywords,
    123456 AS user_id
),
first_pass AS (
  SELECT documents.*,
    -- calculate the term frequency of keywords in the document
    ts_rank(documents.full_text, query.keywords) AS term_frequency
  -- our basic corpus is stored in the documents table,
  -- cross joined with the single-row query context
  FROM documents, query
  -- that match the query keywords defined above
  WHERE documents.full_text @@ query.keywords
  -- ranked by term frequency
  ORDER BY term_frequency DESC
  -- prune to a reasonably large candidate population
  LIMIT 10000
),
second_pass AS (
  SELECT first_pass.*,
    -- create a second pass score of cosine_similarity across embeddings
    pgml.cosine_similarity(document_embeddings.vector, user_embeddings.vector) AS similarity_score
  FROM first_pass
  CROSS JOIN query
  -- grab more data from outside the documents
  JOIN document_embeddings ON document_embeddings.document_id = first_pass.id
  JOIN user_embeddings ON user_embeddings.user_id = query.user_id
  ORDER BY similarity_score DESC
  -- further prune results to top performers for more expensive ranking
  LIMIT 1000
),
third_pass AS (
  SELECT second_pass.*,
    -- create a final score using xgboost; in practice the model's feature
    -- columns would be listed explicitly in the array
    pgml.predict('search relevance model', ARRAY[session_level_features.*]) AS final_score
  FROM second_pass
  CROSS JOIN query
  JOIN session_level_features ON session_level_features.user_id = query.user_id
)
SELECT *
FROM third_pass
ORDER BY final_score DESC
LIMIT 100;
```

If you'd like to play through an interactive notebook to generate models for search relevance in a Postgres database, try it in the Gym. An exercise for the curious reader would be to combine all three scores above into a single algebraic function for ranking, and then into a fourth learned model...
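
As a starting hint for that exercise, you could swap the final SELECT of search.sql for a naive algebraic blend like the one below. The weights are arbitrary placeholders, and the three scores live on different scales, so in practice you'd normalize them first, or let that fourth model learn the combination:

```sql
-- replaces the final SELECT of search.sql; the 0.2 / 0.3 / 0.5 weights
-- are illustrative placeholders, not tuned values
SELECT *,
       0.2 * term_frequency
     + 0.3 * similarity_score
     + 0.5 * final_score AS blended_score
FROM third_pass
ORDER BY blended_score DESC
LIMIT 100;
```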

<center>
<video controls autoplay loop muted width="90%" style="box-shadow: 0 0 8px #000;">
<source src="https://static.postgresml.org/postgresml-org-static/gym_demo.webm" type="video/webm">
<source src="https://static.postgresml.org/postgresml-org-static/gym_demo.mp4" type="video/mp4">
<img src="/images/demos/gym_demo.png" alt="PostgresML in practice" loading="lazy">
</video>
</center>

<center markdown>
[Try the PostgresML Gym](https://gym.postgresml.org/){ .md-button .md-button--primary }
</center>

Many thanks and ❤️ to all those who are supporting this endeavor. We’d love to hear feedback from the broader ML and Engineering community about applications and other real world scenarios to help prioritize our work.

pgml-docs/mkdocs.yml

Lines changed: 2 additions & 0 deletions
@@ -64,6 +64,7 @@ extra:
 copyright: Copyright &copy; 2022 PostgresML Team

 plugins:
+  - glightbox
   - search
   - include-markdown
   - minify:
@@ -141,6 +142,7 @@ nav:
   - Developer Overview: developer_guide/overview.md
   - Blog:
     - Data is Living and Relational: blog/data-is-living-and-relational.md
+    - Postgres Full Text Search is Awesome: blog/postgres-full-text-search-is-awesome.md
   - About:
     - Team: about/team.md
     - Roadmap: about/roadmap.md

pgml-docs/requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -2,4 +2,5 @@ mkdocs
 mkdocs-material
 mkdocs-minify-plugin
 mkdocs-include-markdown-plugin
+mkdocs-glightbox
 pymdown-extensions
