Tagged: Reach

  • admin 9:47 am on May 25, 2017 Permalink
    Tags: Reach

    M&E Journal – Change the Way You Engage – Using Big Data Analytics to Understand and Reach Consumers 

    In this reprint from M&E Journal, learn how Media & Entertainment companies are transforming thanks to big data analytics.
    Teradata Articles

     
  • admin 9:51 am on February 6, 2016 Permalink
    Tags: Reach

    Reach Deep Into History 

    by Andy Sanderson

    While Teradata® QueryGrid™ allows you to access data and enables processing across heterogeneous systems, including technologies from Apache™ Hadoop®, Teradata Aster, Oracle or even MongoDB, some of the most compelling uses involve multiple Teradata Database systems. When organizations use several Teradata platforms for various purposes, having direct SQL access across them, along with the ability to orchestrate processing between them, opens up new possibilities.

    Gain New Insights From Historical Data

    Increasingly stringent regulations require companies to keep data online and accessible for several years or more. Although the most frequently accessed data is the most current, that doesn’t mean the older information is not useful or relevant. Data compiled over several years gives a rich perspective on the business, revealing long-term trends and cyclical patterns.

    Because there is typically much more historical data than current data, and the concurrency and usage of historical information is substantially lower, it makes sense to store it on a separate system with different performance and price characteristics: for example, on a Teradata 1000 or 2000 series warehouse.

    However, keeping historical and current data on separate systems has made it a challenge to gain the insights that are possible only by analyzing the information together. Not any longer: Teradata QueryGrid can seamlessly join all the historical and current information across multiple Teradata systems, without changing the basic data structures and queries. This makes it possible to answer questions that could not previously be addressed, so decision makers can better plan for the future.

    The business can generate a basic report that runs against the past year’s data stored on the integrated data warehouse (IDW):

    SELECT sales_date, SUM(sales_quantity) AS total_sales
    FROM samples.sales_fact
    GROUP BY 1
    ORDER BY 1;

    The query returned 334 rows in 1.5 seconds. Now, if the business wants to run a full report on all of its data, including the historical data, Teradata QueryGrid can query information from all available years across the IDW and the historical data located on another Teradata system. The table on the historical database has exactly the same column structure, but it is called sales_fact_history.

    A simple UNION ALL joins the data across the systems. The Teradata QueryGrid foreign server object created for this example is called td1000:

    SELECT sales_date, SUM(sales_quantity) AS total_sales
    FROM (
        SELECT * FROM samples.sales_fact
        UNION ALL
        SELECT * FROM samples.sales_fact_history@td1000) all_sales
    GROUP BY 1
    ORDER BY 1;

    The query plan, which executes using Teradata QueryGrid, followed these steps:

    • The query was initiated from the IDW.
    • The local query on the IDW ran to select qualifying rows.
    • A remote query on the td1000 ran to select qualifying rows.
    • All rows were returned from the td1000 and placed in a spool on the IDW.
    • The IDW merged both data sets and applied aggregation to all rows.
    • The IDW applied grouping and ordering.

    This Teradata QueryGrid query resulted in 1,336 rows, and 14 million rows were transferred back to the IDW. The query took about 30 seconds to complete.
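    The flow above can be sketched with two in-memory SQLite databases standing in for the IDW and the td1000. This is a simplification for illustration only: Teradata QueryGrid is not involved, and the tiny data set is invented. Only the table names come from the article.

    ```python
    # Minimal sketch: two SQLite databases stand in for the IDW and the
    # td1000 historical system. Illustrative assumption, not QueryGrid.
    import sqlite3

    idw = sqlite3.connect(":memory:")      # stands in for the IDW
    td1000 = sqlite3.connect(":memory:")   # stands in for the history system

    idw.execute("CREATE TABLE sales_fact (sales_date TEXT, sales_quantity INT)")
    td1000.execute(
        "CREATE TABLE sales_fact_history (sales_date TEXT, sales_quantity INT)")

    # Toy data: current year on the IDW, prior years on the history system.
    idw.executemany("INSERT INTO sales_fact VALUES (?, ?)",
                    [("2015-01-01", 5), ("2015-01-02", 3)])
    td1000.executemany("INSERT INTO sales_fact_history VALUES (?, ?)",
                       [("2013-01-01", 7), ("2013-01-01", 1),
                        ("2014-01-01", 2), ("2014-01-01", 4)])

    # Plan steps 3-4: the "remote" query ships every qualifying detail row
    # back to the initiating system before any aggregation happens.
    remote_rows = td1000.execute(
        "SELECT sales_date, sales_quantity FROM sales_fact_history").fetchall()
    rows_transferred = len(remote_rows)

    # Plan steps 5-6: the IDW merges both sets, aggregates, and orders.
    local_rows = idw.execute(
        "SELECT sales_date, sales_quantity FROM sales_fact").fetchall()
    totals = {}
    for sales_date, qty in local_rows + remote_rows:
        totals[sales_date] = totals.get(sales_date, 0) + qty
    report = sorted(totals.items())

    print(rows_transferred)  # every history detail row crossed the "network"
    print(report)
    ```

    With only four history rows the cost is invisible, but the same pattern at 14 million rows is what produces the 30-second run time: every detail row crosses the network before any aggregation happens.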

    Optimize With Push-Down Processing

    Just as you can optimize a query on a single system to perform better, you can also optimize Teradata QueryGrid queries. You need to take into consideration the performance of the individual query pieces that will happen on each system as well as the performance of the network between them.

    One of the most powerful features of Teradata QueryGrid is its ability to orchestrate processing across multiple systems and “push down” the processing when desired. This revised sales report query leverages that capability:

    SELECT sales_date, SUM(sales_quantity) AS total_sales
    FROM samples.sales_fact
    GROUP BY 1
    UNION ALL
    SELECT *
    FROM FOREIGN TABLE (
        SELECT sales_date, SUM(sales_quantity) AS total_sales
        FROM samples.sales_fact_history
        GROUP BY 1)@td1000 old_sales
    ORDER BY 1;

    To utilize Teradata QueryGrid for push-down processing, you use the keywords “FOREIGN TABLE.” This lets you initiate a subquery on the secondary system, which is everything shown in the parentheses in the preceding query.

    In this case, the 1000 series system aggregates the results for its data and sends just the results instead of all the raw data rows. This allows you to minimize the data transferred across the network as well as use the processing power of that system.

    The query plan for this push-down query using Teradata QueryGrid followed these steps:

    • The query was initiated from the IDW.
    • A local query on the IDW ran to select qualifying rows: sales_quantity aggregated.
    • A remote query on the td1000 ran to select qualifying rows: sales_quantity aggregated.
    • Qualifying rows were returned from the td1000 and placed in spool on the IDW.
    • The IDW merged both data sets.
    • The IDW applied ordering.

    The push-down version of this Teradata QueryGrid query resulted in 1,336 rows, with just 1,002 rows transferred. The total elapsed time was about four seconds. As the results demonstrate, there is a dramatic increase in performance when using the push-down capabilities of Teradata QueryGrid. But, as with any optimization, results will depend on your particular environment, such as your systems, data and network. You may also want to use push-down processing to leverage idle resources in order to free up capacity on the IDW, even if the overall performance is slower.
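    A small SQLite stand-in shows why push-down shrinks the transfer. This is an illustration only: the table names come from the article, while the two-connection split and the toy rows are assumptions. The aggregation runs on the remote side, so only one row per sales_date is shipped.

    ```python
    # Minimal sketch of the push-down pattern: SQLite stand-ins, not QueryGrid.
    import sqlite3

    idw = sqlite3.connect(":memory:")      # stands in for the IDW
    td1000 = sqlite3.connect(":memory:")   # stands in for the history system

    idw.execute("CREATE TABLE sales_fact (sales_date TEXT, sales_quantity INT)")
    td1000.execute(
        "CREATE TABLE sales_fact_history (sales_date TEXT, sales_quantity INT)")
    idw.executemany("INSERT INTO sales_fact VALUES (?, ?)",
                    [("2015-01-01", 5), ("2015-01-02", 3)])
    td1000.executemany("INSERT INTO sales_fact_history VALUES (?, ?)",
                       [("2013-01-01", 7), ("2013-01-01", 1),
                        ("2014-01-01", 2), ("2014-01-01", 4)])

    # The FOREIGN TABLE subquery runs remotely: aggregation happens on the
    # td1000 side, so only one summary row per sales_date is shipped.
    pushed_down = td1000.execute(
        "SELECT sales_date, SUM(sales_quantity) FROM sales_fact_history "
        "GROUP BY sales_date").fetchall()
    rows_transferred = len(pushed_down)

    # The IDW aggregates its own data locally, then just merges and orders.
    local_agg = idw.execute(
        "SELECT sales_date, SUM(sales_quantity) FROM sales_fact "
        "GROUP BY sales_date").fetchall()
    report = sorted(local_agg + pushed_down)

    print(rows_transferred)  # one summary row per date, not every detail row
    print(report)
    ```

    Scaled up, this is the difference between shipping 14 million detail rows and 1,002 pre-aggregated rows across the network.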

    Uncover More Value

    Using the push-down capabilities of Teradata QueryGrid lets you orchestrate queries to fit your business needs and data architecture. The solution enables seamless, high-performance, multi-system analytics while supporting many different platforms.

    Leveraging a company’s deep historical data to uncover new answers to business problems is just one of the ways you can use Teradata QueryGrid across multiple database systems. As more businesses continue to adopt the solution, they will find more ways it can help them uncover insights and get even more value from all their data.

    Andy Sanderson is the product marketing manager for many of the Teradata® Unified Data Architecture™ products, including Teradata QueryGrid™.

    This article originally appeared in the Q4 2015 issue of Teradata Magazine. For more on Teradata QueryGrid and how to economically scale your database environment, visit TeradataMagazine.com.

    The post Reach Deep Into History appeared first on Magazine Blog.

    Teradata Blogs Feed

     
  • admin 9:46 am on November 10, 2015 Permalink
    Tags: extends, Reach, Realm

    Teradata Extends Analytics Reach into IoT Realm 

    Teradata Press Mentions

     
  • admin 9:51 am on March 4, 2015 Permalink
    Tags: Organisation, Reach

    Should Your Organisation Reach for the Cloud? 

    In the recent O’Reilly paper by Mark Barlow studying industry trends, Migrating Big Data Analytics to the Cloud, the survey data reveals some fascinating trends. Some of these are logical or match anecdotal understanding. However, one or two findings are a little surprising.


    One of the findings derived from the survey data reveals that respondents are more likely to move cutting-edge, less structured technologies and applications to the cloud. For instance, they are more willing to move Data Discovery environments and Analytic Sandboxes to the cloud over Relational Databases, Data Marts and DR environments.

    This, to me, is a little counter-intuitive. Wouldn’t you, as an IT manager, be more inclined to move your stable, reliable and trusted technologies and applications to the cloud first? Don’t these new Big Data and Analytics technologies and data pose more risks? Wouldn’t you be more willing to move your commodity assets outside the tent before those high-value components?

    Another slightly baffling dichotomy: Data Storage and Management is the application most often planned for a move to the cloud, yet in the same survey, Data Privacy and Security requirements were the most popular reason NOT to migrate. So why use the cloud to store and manage high volumes of sensitive data when privacy and security is a major concern?

    ‘Organisations are just dipping their toe in the water and trialling the technology’.

    I think the hidden answer to these unexpected results is brought out in the paper’s conclusions. The overriding phenomenon is that organisations are unwilling to jump into the cloud lock, stock and barrel when the cloud pool is relatively empty, and those that are utilising the cloud have not fully committed themselves. Organisations are just dipping their toe in the water and trialling the technology. This helps the organisation adjust, iron out the wrinkles, convince the internal doubters and assess how they can best use the cloud.

    Teradata and the Cloud

    Therefore, non-critical applications like back-office discovery analytics and sandboxes are seen as good guinea pigs for a general trial of the cloud. There is also a need to trial those new technologies, and the cloud provides a quick and easy way to do so. Data Storage and Management is a very simple application with few, simple touch-points. Sure, there is a lot of data to move to the cloud, but if it’s just for archive storage with very few users, the security risks are probably quite low.


    Moreover, flexibility seems to be a key driver for the cloud: scaling up and down, switching on and off, moving from one platform to another are all becoming more and more important for businesses trying to gain quick competitive advantage in a fast-paced marketplace.

    Teradata’s Public cloud, launched in 2014, is growing quickly, with many customers using it for Test and Development and Disaster Recovery use cases. However, it’s still early days, and Teradata anticipates its cloud footprint will continue to grow steadily as the industry approaches a tipping point when organisations view the cloud as a mature, low-risk approach for deploying their IT assets. Take a look at Teradata’s Cloud offering – you might be surprised.

    So when is your organisation going to dip its toe in the water? Or should I say, when are you going to reach for the …Clouds?

    Greg Taranto is a Senior Pre-Sales Consultant at Teradata ANZ. Greg specialises in designing and tailoring Data Warehouse solutions for organisations across many industries. Greg’s extensive background in Data Warehouse architecture, design and implementation, along with his business solutions experience, allows him to bring many worlds together to achieve optimal results for Teradata’s customers and prospective customers. You can also connect with Greg on LinkedIn.

    The post Should Your Organisation Reach for the Cloud? appeared first on International Blog.


     