Tagged: Access

  • admin 9:54 am on September 28, 2016
    Tags: Access

Enabling in-database processing with SAS/ACCESS to Teradata

    Teradata Videos

  • admin 9:52 am on October 26, 2015
    Tags: Access, paradigm

    Teradata QueryGrid Changes the Data Access Paradigm 

    By Dirk Anderson

    There has been a lot of recent industry buzz about Teradata® QueryGrid™, and for good reason. It is one of those rare products that fundamentally changes the way organizations work.

    The access layer allows queries that run on the Teradata Database to seamlessly access information on external servers such as Apache™ Hadoop®, Oracle Database and MongoDB, offering significant time savings. This ability opens up vast opportunities for data scientists and power users who spend much of their time gathering and assembling data for analysis.

    For example, data scientists conducting research with information from the Teradata Database and other data sources often spend a significant amount of time extracting data from disparate sources. Additional time is spent moving the data into SAS or another location for analysis. Plus, when dozens of users load information into data stores, enterprise data is often duplicated on various departmental or personal platforms. Not only is this a capacity concern, but having multiple copies of the data on individual machines also poses a security risk. Teradata QueryGrid solves the problem.

    Simplify a Complex Process

    With Teradata QueryGrid, data scientists can develop a single script using SQL syntax that runs on the Teradata Database to join all the data sources together. This removes the complexities of connecting to external servers, writing extraction scripts, transferring data, allocating storage space, translating data types into formats recognized by the local machine, compressing the local data and remembering to encrypt sensitive information.
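    As a rough sketch of what such a script might look like, assuming foreign servers named hadoop_prod and oracle_fin have already been defined with CREATE FOREIGN SERVER (table and column names here are purely illustrative):

    ```sql
    -- Join a local Teradata table with remote Hadoop and Oracle tables
    -- in a single query; remote tables are referenced as table@server.
    SELECT c.customer_id,
           c.segment,
           w.page_views,        -- fetched from Hadoop via QueryGrid
           o.open_balance       -- fetched from Oracle via QueryGrid
    FROM   customers c
    JOIN   web_clicks@hadoop_prod w
           ON w.customer_id = c.customer_id
    JOIN   gl_balances@oracle_fin o
           ON o.customer_id = c.customer_id;
    ```

    The remote extraction, data transfer and type translation all happen under the covers; the analyst writes ordinary SQL.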

    When data scientists need to run SAS or R analytics, they can run a rich set of in-database functions within the massively parallel architecture of the Teradata Database to achieve extreme performance. The results can be saved and shared securely among colleagues with Teradata Data Lab. This process simplifies coding by reducing (or eliminating) the amount of data that needs to be stored locally and dramatically increases the speed-to-market for analytics.

    Remove the Bottlenecks

    Each new generation of processing nodes brings faster CPUs and increased I/O bandwidth for processing and moving data. Still, the biggest performance challenge in the current era of big data is channeling information into and out of the Teradata platform.

    Single-channel data movement is not effective for handling large volumes of data, and can cause bottlenecks. Teradata QueryGrid solves that by facilitating high-volume, multi-channel data movement between Teradata, Teradata Aster and Hadoop platforms. Ideally, the Hadoop and Teradata platforms should be connected using BYNET® over InfiniBand.

    Another bottleneck stems from the expense and time-to-market required for a new ETL project. Developing traditional ETL to load the data warehouse can be so prohibitive that it is cost-effective only for high-value data. With the access layer, scripts can be rapidly developed to select data directly from source tables and insert or upsert them directly into target tables. This process can often be completed with just a few simple SQL statements. Since coding is reduced and simplified, maintenance and support are easier, which greatly reduces costs and accelerates speed-to-market.
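    A hedged sketch of such a load, again assuming a hypothetical hadoop_prod foreign server and illustrative table names:

    ```sql
    -- Insert remote Hadoop data directly into a Teradata target table.
    INSERT INTO edw.daily_sales
    SELECT sale_date, store_id, SUM(amount)
    FROM   pos_transactions@hadoop_prod
    WHERE  sale_date = CURRENT_DATE - 1
    GROUP  BY sale_date, store_id;

    -- Or upsert with MERGE when some rows may already exist.
    MERGE INTO edw.daily_sales AS tgt
    USING (SELECT sale_date, store_id, SUM(amount) AS amount
           FROM   pos_transactions@hadoop_prod
           WHERE  sale_date = CURRENT_DATE - 1
           GROUP  BY sale_date, store_id) AS src
    ON (tgt.sale_date = src.sale_date AND tgt.store_id = src.store_id)
    WHEN MATCHED THEN UPDATE SET amount = src.amount
    WHEN NOT MATCHED THEN INSERT (sale_date, store_id, amount)
         VALUES (src.sale_date, src.store_id, src.amount);
    ```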

    Although Teradata QueryGrid is not a replacement for ETL tools, the capability gap is narrowing. Teradata has been adding ETL-like capabilities with each release of the Teradata Database. For example, Teradata Database 15.0 includes the ability for SQL to invoke non-SQL languages such as Ruby, Python and Perl. These enhancements dramatically shift the balance in determining which ETL processes can be reasonably performed in-database with Teradata QueryGrid and which should be done using traditional methods.
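    One mechanism Teradata Database provides for this is the SCRIPT table operator, which streams rows through an installed script. A minimal sketch, with an invented script and table (the script would have to be installed in the database first):

    ```sql
    -- Run a previously installed Python script against rows of a table,
    -- returning the script's output as a result set.
    SELECT *
    FROM SCRIPT(
        ON (SELECT comment_text FROM product_reviews)
        SCRIPT_COMMAND('python ./mydb/tokenize.py')
        RETURNS ('token VARCHAR(100)')
    );
    ```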

    From Hours to Minutes

    The access layer can move data to and from an external server. This bi-directional capability opens up options for users. For instance, a single script can extract data from a source system, run referential integrity checks on the Teradata Database and send any data that failed the checks back to the source system for review.
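    A sketch of that round trip, assuming a hypothetical src_oracle foreign server and illustrative tables:

    ```sql
    -- Pull new rows from the source system into a staging table.
    INSERT INTO stage.orders
    SELECT * FROM orders@src_oracle;

    -- Push back any rows that fail a referential-integrity check
    -- against the local customer dimension.
    INSERT INTO ri_failures@src_oracle
    SELECT o.*
    FROM   stage.orders o
    LEFT   JOIN edw.customers c
           ON c.customer_id = o.customer_id
    WHERE  c.customer_id IS NULL;  -- no matching customer: failed the check
    ```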

    Bi-directional data movement is beneficial when information needs to be loaded both into the enterprise-class Teradata server for general users and into a Teradata Aster server for data scientists. A single script loads the data onto the enterprise server, where it is integrated with other information, and the result is pushed over to the Teradata Aster Database. This process used to take hours. With Teradata QueryGrid, it now takes just minutes.

    Leverage the Solution’s Full Potential

    Teradata QueryGrid is causing a paradigm shift in the way organizations work with and benefit from the Teradata Database. By understanding the potential of the access layer and how to better leverage the solution, companies are better positioned for their future data warehousing and big data analytics projects.

    Dirk Anderson is a senior vice president of a major financial institution and a Teradata Certified Master. He has worked hands-on with Teradata solutions for more than 20 years. 

    For this article, as well as more technical solutions and insights, visit TeradataMagazine.com.



    The post Teradata QueryGrid Changes the Data Access Paradigm appeared first on Magazine Blog.

    Teradata Blogs Feed

  • admin 9:51 am on October 1, 2015
    Tags: Access, Expand

    Teradata Uses Open Source to Expand Access to Big Data for the Enterprise 

    By Mark Shainman, Global Program Director, Competitive Programs

    Teradata’s announcement of the accelerated release of enterprise-grade ODBC/JDBC drivers for Presto opens up an ocean of big data on Hadoop to the existing SQL-based infrastructure. For companies seeking to add big data to their analytical mix, easy access through Presto can solve a variety of problems that have slowed big data adoption. It also opens up new ways of querying data that were not possible with some other SQL on Hadoop tools. Here’s why.

    One of the big questions facing those who toil to create business value out of data is how the worlds of SQL and big data come together. After the first wave of excitement about the power of Hadoop, the community quickly realized that because of SQL’s deep and wide adoption, Hadoop must speak SQL. And so the race began. Hive was first out of the gate, followed by Impala and many others. The goal of all of these initiatives was to make the repository of big data that was growing inside Hadoop accessible through SQL or SQL-like languages.

    In the fall of 2012, Facebook determined that none of these solutions would meet its needs, so it created Presto as a high-performance way to run SQL queries against data in Hadoop. By 2013, Presto was in production, and it was released as open source in November of that year.

    In 2013, Facebook found that Presto was faster than Hive/MapReduce for certain workloads, although there are many efforts underway in the Hive community to increase its speed. Facebook achieved these gains by bypassing the conventional MapReduce programming paradigm and creating a way to interact with data in HDFS, the Hadoop file system, directly. This and other optimizations at the Java Virtual Machine level allow Presto not only to execute queries faster, but also to use other stores for data. This extensibility allows Presto to query data stored in Cassandra, MySQL, or other repositories. In other words, Presto can become a query aggregation point, that is, a query processor that can bring data from many repositories together in one query.

    In June 2015, Teradata announced a full embrace of Presto. Teradata would add developers to the project, add missing features both as open source and as proprietary extensions, and provide enterprise-grade support. This move was the next step in Teradata’s effort to bring open source into its ecosystem. The Teradata Unified Data Architecture provides a model for how traditional data warehouses and big data repositories can work together. Teradata has supported integration of open source first through partnerships with open source Hadoop vendors such as Hortonworks, Cloudera, and MapR, and now through participation in an ongoing open source project.

    Teradata’s embrace of Presto provided its customers with a powerful combination. Through Teradata QueryGrid, analysts can use the Teradata Data Warehouse as a query aggregation point and gather data from Hadoop systems, other SQL systems, and Presto. The queries in Presto can aggregate data from Hadoop, but also from Cassandra and other systems. This powerful capability lets Teradata’s Unified Data Architecture deliver data access across a broad spectrum of big data platforms.

    To provide Presto support for mainstream BI tools required two things: ANSI SQL support and ODBC/JDBC drivers. Much of the world of BI access works through BI toolsets that understand ANSI SQL. A tool like QlikView, MicroStrategy, or Tableau allows a user to easily query large datasets as well as visualize the data without having to hand-write SQL statements, opening up the world of data access and data analysis to a larger number of users. Having robust BI tool support is critical for broader adoption of Presto within the enterprise.

    For this reason, ANSI SQL support is crucial to making the integration and use of BI tools easy. Many other SQL-on-Hadoop projects offer limited SQL support or use proprietary SQL-like languages. Presto is not one of them. To meet Facebook’s needs, SQL support had to be strong and conform to ANSI standards, and Teradata’s joining the project will make Presto’s SQL scope and support stronger still.

    The main way that BI tools connect and interact with databases and query engines is through ODBC/JDBC drivers. For the tools to communicate well and perform well, these drivers have to be solid and enterprise class. That’s what yesterday’s announcement is all about.

    Teradata has listened to the needs of the Presto community and accelerated its plans for adding enterprise-grade ODBC/JDBC support to Presto. In December, Teradata will make available a free, enterprise class, fully supported ODBC driver, with a JDBC driver to follow in Q1 2016. Both will be available for download on Teradata.com.

    With ODBC/JDBC drivers in place and the ANSI SQL support that Presto offers, anyone using modern BI tools can access data in Hadoop through Presto. Of course, certification of the tools will be necessary for full functionality to be available, but with the drivers in place, access is possible. Existing users of Presto, such as Netflix, are extremely happy with the announcement. As Kurt Brown, Director, Data Platform at Netflix put it, “Presto is a key technology in the Netflix big data platform. One big challenge has been the absence of enterprise-grade ODBC and JDBC drivers. We think it’s great that Teradata has decided to accelerate their plans and deliver this feature this year.”

    The post Teradata Uses Open Source to Expand Access to Big Data for the Enterprise appeared first on Data Points.


  • admin 9:51 am on April 14, 2015
    Tags: Access

    Think Big Dashboard Engine Powers Fast Access to Hadoop 

    Dashboard Engine for Hadoop makes business intelligence reporting available for data lakes
    Teradata News Releases
