Editors pass over the blog #278


Merged 1 commit on Aug 25, 2022
31 changes: 19 additions & 12 deletions pgml-docs/docs/blog/data-is-living-and-relational.md
@@ -37,28 +37,35 @@ A common problem with data science and machine learning tutorials is the publish

</center>

They are:

- usually denormalized into a single tabular form, e.g. a CSV file,
- often relatively tiny to medium amounts of data, not big data,
- always static, with new rows never added,
- and sometimes pre-treated to clean or simplify the data (a typical loading workflow is sketched after this list).
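
To make that status quo concrete, here is a minimal sketch of the typical tutorial loading workflow in SQL. The `wine_quality` table and the CSV path are hypothetical stand-ins for any tutorial dataset:

```sql
-- Hypothetical stand-in for a tutorial dataset: a single denormalized
-- table, loaded once from a CSV and never touched again.
CREATE TABLE wine_quality (
    fixed_acidity    REAL,
    volatile_acidity REAL,
    alcohol          REAL,
    quality          INTEGER
);

-- A one-time load; in a tutorial, no new rows ever arrive after this.
COPY wine_quality FROM '/tmp/winequality-red.csv' CSV HEADER;
```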

As Data Science transitions from academia into industry, these norms influence organizations and applications. Professional Data Scientists need teams of Data Engineers to move data from production databases into data warehouses and denormalized schemas that are more familiar and, ideally, easier to work with. Large offline batch jobs are a typical integration point between Data Scientists and their Engineering counterparts, who primarily deal with online systems. As the systems grow more complex, additional specialized Machine Learning Engineers are required to optimize performance and scalability bottlenecks between databases, warehouses, models and applications.
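
In practice, that integration point is often a scheduled batch job that flattens normalized production tables into an extract for the warehouse. A minimal sketch, assuming invented `orders`, `customers`, and `order_items` tables:

```sql
-- Hypothetical nightly batch job: denormalize production tables into a
-- flat CSV extract for the data warehouse.
COPY (
    SELECT o.id, o.created_at, c.region, SUM(i.price) AS order_total
    FROM orders o
    JOIN customers c   ON c.id = o.customer_id
    JOIN order_items i ON i.order_id = o.id
    GROUP BY o.id, o.created_at, c.region
) TO '/exports/orders_flat.csv' CSV HEADER;
```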

This eventually leads to expensive maintenance and to terminal complexity: new improvements to the system become exponentially more difficult. Ultimately, previously working models start getting replaced by simpler solutions, so the business can continue to iterate. This is not a new phenomenon; see the fate of the Netflix Prize.

Announcing the PostgresML Gym 🎉
-------------------------------

Instead of starting from the academic perspective that data is dead, PostgresML embraces the living and dynamic nature of data produced by modern organizations. It's relational and growing in multiple dimensions.

![relational data](/images/illustrations/uml.png)

Relational data:

- is normalized for real-time performance and correctness considerations,
- and has new rows constantly added and updated, which form the incomplete features for a prediction (sketched after these lists).

Meanwhile, denormalized data sets:

- may grow to billions of rows, and terabytes of data,
- and often span multiple iterations of the schema, with software bugs introducing outliers.
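
To sketch what those incomplete features look like in practice (the schema below is again invented for illustration), a prediction made at request time has to assemble its inputs from normalized tables that are still growing:

```sql
-- Hypothetical feature assembly at request time: the joined rows are
-- still being inserted and updated while predictions are served.
SELECT
    c.lifetime_value,
    COUNT(DISTINCT o.id)      AS orders_last_30d,
    COALESCE(SUM(i.price), 0) AS spend_last_30d
FROM customers c
LEFT JOIN orders o      ON o.customer_id = c.id
                       AND o.created_at > now() - interval '30 days'
LEFT JOIN order_items i ON i.order_id = o.id
WHERE c.id = 42
GROUP BY c.id, c.lifetime_value;
```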

We think it’s worth attempting to move the machine learning process and modern data architectures beyond the status quo. To that end, we’re building the PostgresML Gym, a free offering, to provide a test bed for real world ML experimentation in a Postgres database. Your personal Gym will include the PostgresML dashboard, several tutorial notebooks to get you started, and access to your own personal PostgreSQL database, supercharged with our machine learning extension.
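
As a rough sketch of the workflow the extension enables (the project, table, and feature values below are placeholders; see the PostgresML documentation for the exact API), training and inference can both happen where the data lives:

```sql
-- Train a model against a table in the database; the project name,
-- relation and label column here are hypothetical.
SELECT * FROM pgml.train(
    'My First Project', -- project name
    'regression',       -- task
    'wine_quality',     -- training relation
    'quality'           -- label column
);

-- Predict inline, reading features straight from the relational data.
SELECT pgml.predict('My First Project', ARRAY[7.4, 0.7, 9.4]) AS predicted_quality;
```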

<center>
<video autoplay loop muted width="90%" style="box-shadow: 0 0 8px #000;">