AWS Unveils Redshift RG Instances with Integrated Data Lake Engine to Combat Rising Analytics Costs
AWS has launched new Graviton-powered RG instances for its Amazon Redshift data warehouse, designed to slash analytics costs and simplify lakehouse architectures by unifying query engines for warehouse and S3 data.
The integrated engine allows SQL analytics to run natively across both Redshift and Amazon S3 data lakes, eliminating the need for separate Spectrum scans and reducing unpredictable billing spikes.
“Earlier, Amazon Redshift RA3 systems operated as two separate engines, with Redshift handling warehouse data and Spectrum handling S3 data lake queries. When a query required both, AWS had to coordinate between the two systems, which added complexity, slowed performance, and made Spectrum scan costs unpredictable,” said Pareekh Jain, principal analyst at Pareekh Consulting.
How the New RG Instances Change the Game
“The new RG instances combine those worlds into one integrated engine running directly inside Redshift itself. That means Iceberg, Parquet, and S3 lake data can now be queried natively alongside warehouse data with less movement, lower overhead, and better performance optimization while also eliminating separate Spectrum per-scan charges,” Jain added.
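In practice, "queried natively alongside warehouse data" means a single SQL statement can join a Redshift table to Iceberg or Parquet data registered in an external schema, with one engine planning the whole query. A minimal sketch of what such a cross-source statement might look like — all schema, table, and column names here are invented for illustration, not taken from AWS documentation:

```python
# Hypothetical illustration: with one integrated engine, a single SQL
# statement can span a Redshift warehouse table and an Iceberg table in
# S3 -- no separate Spectrum scan step. Every identifier below
# (sales.orders, lake_schema.web_events, the columns) is made up.

def unified_lakehouse_sql(warehouse_table: str, lake_table: str, join_key: str) -> str:
    """Build one cross-source query instead of coordinating two engines."""
    return (
        f"SELECT w.{join_key}, w.revenue, l.click_count\n"
        f"FROM {warehouse_table} AS w\n"
        f"JOIN {lake_table} AS l  -- Iceberg/Parquet data in S3\n"
        f"  ON w.{join_key} = l.{join_key};"
    )

sql = unified_lakehouse_sql("sales.orders", "lake_schema.web_events", "order_id")
print(sql)
```

Under the dual-engine RA3 model, the lake side of a join like this would have been handed off to Spectrum and billed per scan; the RG pitch is that the same statement now runs inside one optimized engine.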

Separate Spectrum charges had become a growing pain point for enterprises as AI workloads drove higher query volumes and more machine-generated analytics, leading to sudden bill spikes.
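The spike dynamic is simple arithmetic: Spectrum bills per terabyte of S3 data scanned (its long-standing published rate is $5/TB), so the bill tracks query volume directly. A back-of-the-envelope sketch, with the monthly scan volumes invented for illustration:

```python
# Why per-scan billing spikes: cost is proportional to terabytes of S3
# data scanned. $5/TB is Spectrum's published rate; the scan volumes
# below are hypothetical.

SPECTRUM_USD_PER_TB = 5.0

def spectrum_monthly_cost(tb_scanned_per_month: float) -> float:
    return tb_scanned_per_month * SPECTRUM_USD_PER_TB

# A steady BI workload vs. the same lake after AI agents start issuing
# machine-generated queries at 7x the scan volume.
baseline = spectrum_monthly_cost(200)    # 200 TB scanned -> $1,000
ai_spike = spectrum_monthly_cost(1400)   # 1,400 TB scanned -> $7,000

print(f"baseline: ${baseline:,.0f}/month, after spike: ${ai_spike:,.0f}/month")
```

Capacity-based pricing caps this exposure: the same sevenfold jump in machine-generated scans lands on fixed compute rather than a metered bill.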
Background: The Challenge of Dual Engines
Before RG instances, Redshift RA3 systems ran warehouse and S3 lake queries on separate engines that AWS had to coordinate, adding complexity and slowing performance — especially for queries spanning both data stores.
The new instances are seen as AWS’s response to rivals such as Databricks, Snowflake, Google Cloud (BigQuery), and Microsoft (Fabric), all of which are pushing unified lakehouse platforms to reduce operational sprawl.

“RG instances do strengthen Amazon Redshift competitively, but mostly as a defensive move rather than a breakthrough disruption,” Jain said.
While Databricks leans on AI and data science, Snowflake on multi-cloud simplicity, Google Cloud on AI-native analytics via BigLake, and Microsoft on Fabric-Power BI-Copilot integration, AWS is betting on S3 scale and tighter Redshift optimization.
What This Means for Enterprises
According to Greyhound Research Chief Analyst Sanchit Vir Gogia, CIOs should focus RG adoption on what he calls the “painful overlap” of workloads rather than migrating everything.
“The best fit is not every workload. The best fit is the painful overlap. That overlap is where Redshift, S3, open formats, BI, recurring analytics, cost pressure, and AI-assisted querying meet. That is where RG can materially reduce friction,” Gogia said.
“CIOs should inventory external schemas, recurring analytical workloads, and cost-sensitive queries. The RG instances offer a clear path, but only for the right workloads,” he added.
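Gogia's inventory advice can be read as a simple triage: flag only workloads sitting in the overlap — recurring, cost-sensitive, and already reaching into open-format S3 data via external schemas. A hypothetical sketch of that triage; the workload records, field names, and thresholds are all invented for illustration:

```python
# Hypothetical triage of an analytics inventory for RG fit. A workload
# is a candidate only if it hits the full "painful overlap": recurring,
# touches the lake, and is cost-sensitive. All example data is made up.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    recurring: bool          # scheduled BI / analytics job?
    touches_lake: bool       # reads Iceberg/Parquet via an external schema?
    cost_sensitive: bool     # Spectrum scan charges a known pain point?

def rg_candidates(inventory: list[Workload]) -> list[str]:
    """Return workloads where a unified engine could cut the most friction."""
    return [w.name for w in inventory
            if w.recurring and w.touches_lake and w.cost_sensitive]

inventory = [
    Workload("daily_sales_dashboard", True, True, True),
    Workload("one_off_ml_export", False, True, True),
    Workload("internal_hr_report", True, False, False),
]
print(rg_candidates(inventory))  # only the dashboard sits in the overlap
```

The point of the filter is Gogia's caveat: a one-off export or a warehouse-only report gains little from RG, so it stays off the migration list.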
The new instances are available now for Amazon Redshift, with pricing based on compute capacity rather than per-scan charges, offering more predictable costs.