Tibby Xu (TX): In the typical data warehouse scenario, if one thing goes wrong, the
whole system is brought to its knees.
Eric Kavanagh (EK): That’s right.
TX: There’s a huge dependence on the IT side before any other business can get done.
Whereas if you flip the situation on its head and focus more on self-service, you get the users to respond
and react quickly, in tune with their own needs, and then worry about the performance of those queries after the
fact with appropriate technologies like column-based databases, temporary caches, even data grids for
processing things in parallel with something like a Map/Reduce technique.
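To make the Map/Reduce idea concrete, here is a minimal sketch of that pattern in Python using only the standard library; the process pool stands in for a data grid, and the partitioned records and field names are invented for illustration.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(records):
    # Map: each worker counts query hits per region within its own partition.
    return Counter(r["region"] for r in records)

def reduce_phase(left, right):
    # Reduce: merge the partial counts from two partitions.
    return left + right

if __name__ == "__main__":
    partitions = [
        [{"region": "east"}, {"region": "west"}],
        [{"region": "east"}, {"region": "east"}],
    ]
    with Pool(processes=2) as pool:
        partials = pool.map(map_phase, partitions)
    print(reduce(reduce_phase, partials))  # Counter({'east': 3, 'west': 1})
```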
EK: Yeah, so let’s walk through this. What would the ideal scenario look like? For
example, instead of having your traditional array of OLAP cubes built on top of a data warehouse,
and maybe even sometimes bringing in additional data through data federation or something like
that, would you instead have an array of data marts that are dimension-specific, or how would that work?
Bring in the Other Atomic Data Sources
TX: Sure, that’s an option. But you can also bring in the other atomic data
sources, whatever they may be. A lot of people with service-oriented architectures have services providing
various data feeds. Of course you have transactional operational databases, and of course data marts,
warehouses, and the like. Even external feeds from vendors can become one of those sources. So essentially
you provide just the individual sources to the user in a user-friendly way and really allow them to drag and
drop, combine, and manipulate data sources together on the fly to get exactly the results that they want. I
think that is really the idea.
EK: Yeah, and let’s bring in William and Malcolm. William, what do you think about
that idea?
William Laurent (WL): Yeah, the federated model is something that I have actually written
about, including writing specifically about InetSoft. In the way we traditionally look at the data
warehousing world, we go through these costly ETL processes to extract, transform, and load the data,
and a lot of the logic is, quite frankly, encapsulated and not transparent; you don’t know the business
rules. So with the federated mashup, I see the potential. I am a believer in the potential of the federated
mashup, in the sense that we are able to pull data from various systems dynamically, look at the data,
and have semantic virtualization that makes sense. So you don’t have to mandate consistency, because quite
frankly people are now looking at visual representations of the data, and they can best understand what it is.
How Is a Federated Data Management Model Different from Data Warehousing?
A federated data management model and data warehousing are two different approaches to managing and integrating data from multiple sources. Here are the main differences:
Federated Data Management:
- Connects to multiple data sources in real time, leaving the data in its original location.
- Data is accessed and processed where it resides, without creating a centralized copy.
- Supports data virtualization, data federation, and data integration.
- Ideal for real-time analytics, data sharing, and collaboration.
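To make the contrast concrete, here is a minimal sketch of the federated idea in Python: each source is queried where it lives, and the results are combined in memory with no centralized copy. The database file, CSV feed, table, and column names are all hypothetical.

```python
import csv
import sqlite3

# Source 1: an operational database, queried in place.
with sqlite3.connect("operations.db") as conn:
    revenue = dict(conn.execute("SELECT station, revenue FROM ad_sales"))

# Source 2: an external vendor feed delivered as a CSV file.
with open("audience_feed.csv", newline="") as f:
    audience = {row["station"]: int(row["listeners"])
                for row in csv.DictReader(f)}

# Combine on the fly; the sources themselves are never copied or moved.
for station in revenue.keys() & audience.keys():
    print(station, revenue[station] / audience[station])  # revenue per listener
```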
Data Warehousing:
- Involves copying data from multiple sources into a centralized repository.
- Data is transformed, cleansed, and optimized for querying and analysis.
- Supports business intelligence, reporting, and historical analytics.
- Ideal for strategic decision-making, trend analysis, and data mining.
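By contrast, here is an equally minimal ETL sketch under the same hypothetical schema: data is extracted from the operational source, transformed, and loaded into a centralized warehouse table that is kept as a separate, query-optimized copy.

```python
import sqlite3

source = sqlite3.connect("operations.db")    # operational system
warehouse = sqlite3.connect("warehouse.db")  # centralized repository

# Extract from the operational system.
rows = source.execute("SELECT station, revenue, sold_at FROM ad_sales").fetchall()

# Transform: cleanse and normalize (here, just uppercasing station codes).
rows = [(station.upper(), revenue, sold_at) for station, revenue, sold_at in rows]

# Load into the warehouse copy, optimized for historical querying.
warehouse.execute(
    "CREATE TABLE IF NOT EXISTS fact_ad_sales (station TEXT, revenue REAL, sold_at TEXT)")
warehouse.executemany("INSERT INTO fact_ad_sales VALUES (?, ?, ?)", rows)
warehouse.commit()
```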
How a Radio Broadcasting Company Used Data Mashup to Cut Costs in Its Business Intelligence Environment
A major radio broadcasting company with a nationwide presence was struggling with inefficiencies in its Business Intelligence (BI) environment. The organization operated multiple stations across various cities, with each station responsible for gathering and analyzing data related to audience demographics, advertisement revenue, programming effectiveness, and operational costs. This generated a large volume of data across different systems, making it difficult to streamline processes, extract insights efficiently, and manage increasing costs associated with maintaining a fragmented BI environment.
To address these challenges, the company implemented a data mashup tool, significantly transforming their approach to data integration, analysis, and decision-making. By consolidating disparate data sources into a unified view, they reduced costs and enhanced operational efficiency.
The Challenge:
- Disparate Data Sources: The radio broadcasting company had multiple data sources across its stations. These included audience measurement systems, financial records, advertising revenue streams, web analytics for online radio services, and operational data (e.g., equipment usage, staffing, and maintenance). These systems operated in silos, making it difficult for decision-makers to gain a holistic view of the company's performance.
- Costly and Time-Consuming Data Integration: Traditional data integration methods, such as ETL (Extract, Transform, Load) processes, were expensive and time-consuming. The company had to invest heavily in IT resources to pull together data from different databases, spreadsheets, and legacy systems. This increased maintenance costs and slowed down time-to-insight.
- Slow and Fragmented Reporting: Business users, including station managers and executives, faced delays in getting accurate, real-time reports. The fragmented nature of the data meant that analysis had to be pieced together manually, leading to errors and incomplete views of the business. Additionally, the company often had to rely on external consultants to build customized reports, driving up costs.
- Increased Operational Costs: The complexity of the data environment meant the company was spending significant resources on licensing multiple BI tools, maintaining infrastructure for disparate databases, and hiring personnel to manage the data flows.
The Solution: Data Mashup Tool
The company chose a modern data mashup solution to overcome these challenges. The tool they selected enabled them to integrate data from multiple sources in real-time, without requiring complex ETL processes. Instead, the data mashup tool created dynamic, on-demand connections between different data points, providing a single, unified view of the organization's performance.
Key Features of the Data Mashup Solution:
- Real-Time Data Integration: The data mashup tool allowed the company to pull data from various sources, including SQL databases, spreadsheets, APIs, and cloud platforms. It automatically transformed the data on the fly, enabling real-time integration without the need for traditional, costly ETL processes (a rough sketch of this kind of read-time normalization follows this list).
- User-Friendly Interface: With a visual drag-and-drop interface, business users without technical expertise could create and customize their own reports. This reduced reliance on the IT department and eliminated the need for external consultants, saving time and money.
- Scalability: As the company continued to grow, the data mashup tool scaled effortlessly. Whether it was incorporating new data sources from acquisitions, additional cities, or new audience platforms (e.g., podcasts and online streaming), the system adapted without expensive upgrades.
- Cost-Effective Infrastructure: The broadcasting company consolidated its data infrastructure by moving away from multiple BI tools to a more centralized, cloud-based data mashup environment. This dramatically cut licensing and maintenance costs.
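As a rough illustration of the "transformed on-the-fly" feature above, the sketch below normalizes three hypothetical feeds (a SQL database, a spreadsheet export, and a JSON API) into one common record shape at read time, rather than through a batch ETL job. All file names, endpoints, and field names are invented.

```python
import csv
import json
import sqlite3
from urllib.request import urlopen

def from_database(path):
    with sqlite3.connect(path) as conn:
        for station, revenue in conn.execute("SELECT station, revenue FROM sales"):
            yield {"station": station, "revenue": float(revenue)}

def from_spreadsheet(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"station": row["Station"], "revenue": float(row["Revenue ($)"])}

def from_api(url):
    # Defined for completeness; not called in this offline example.
    for item in json.load(urlopen(url))["results"]:
        yield {"station": item["stationId"], "revenue": float(item["adRevenue"])}

# Every feed arrives in the same normalized shape, ready to mash up in a view.
records = [*from_database("sales.db"), *from_spreadsheet("sales.csv")]
```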
Implementation Process:
- Assessment of Data Sources: The company began by conducting an audit of its existing data environment. This involved identifying all the key data sources, including advertising revenue, audience analytics, content scheduling, and operational costs.
- Data Connection and Integration: The data mashup tool was deployed to connect these data sources in real time. With no need for complex data migration or transformation processes, integration was swift. The tool automatically harmonized data formats, ensuring consistency across stations and departments (see the harmonization sketch after this list).
- User Training and Empowerment: One of the significant benefits of the mashup tool was its user-friendly nature. The company invested in minimal training for key users across various departments, allowing them to adopt the tool quickly. Station managers, analysts, and executives were able to generate reports and dashboards tailored to their specific needs without requiring IT support.
- Real-Time Analytics and Dashboards: The mashup tool provided intuitive dashboards that combined data from various sources into a single, unified interface. Real-time analytics helped executives monitor performance metrics such as revenue per listener, advertising effectiveness, and station-specific costs.
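The harmonization step mentioned above can be pictured as a small normalizing function applied to every record at integration time. The sketch below is illustrative only; the station-code and date conventions are invented.

```python
from datetime import datetime

def harmonize(record):
    out = dict(record)
    # Station codes: one station reports "wxyz", another "WXYZ-FM".
    out["station"] = out["station"].upper().removesuffix("-FM")
    # Dates: accept either US-style or ISO input, always emit ISO.
    for fmt in ("%m/%d/%Y", "%Y-%m-%d"):
        try:
            out["date"] = datetime.strptime(out["date"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    return out

print(harmonize({"station": "wxyz-fm", "date": "03/15/2024"}))
# {'station': 'WXYZ', 'date': '2024-03-15'}
```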
Results:
- Cost Savings:
  - Reduction in IT Costs: By eliminating the need for traditional ETL processes and consolidating multiple BI tools into one data mashup platform, the company saw a 35% reduction in IT costs. The streamlined system reduced the dependency on external consultants and in-house data engineers.
  - Lower Licensing Fees: Migrating to a single data mashup tool reduced the need for multiple software licenses, saving the company tens of thousands of dollars annually. The centralized platform also cut down on infrastructure costs, as the company moved much of its data to a cloud-based environment.
- Faster Decision-Making: With real-time data integration and on-demand reporting, the company was able to make faster, more informed decisions. For example, the programming team could quickly assess the performance of specific shows and adjust content strategies, while the advertising sales team could optimize ad placements based on up-to-the-minute audience metrics.
- Improved Accuracy and Efficiency: The elimination of manual data integration and reporting reduced errors and improved data accuracy. Business users could now access real-time, trustworthy data across all stations, leading to better operational efficiency. The time required to generate reports dropped by 50%.
- Enhanced Revenue Optimization: With the ability to cross-analyze data from different departments, the company optimized its advertising strategies. For instance, the mashup tool allowed the marketing team to correlate audience engagement data with advertising revenue, leading to more targeted ad placements and better overall revenue generation.
- Scalable Growth: As the company expanded into new markets, the data mashup tool easily integrated additional data sources from newly acquired stations. The system's scalability ensured that growth did not require a proportional increase in IT resources, further reducing the total cost of ownership.