Tagged: Environment

  • admin 9:46 am on November 23, 2015 Permalink
    Tags: Environment

    Teradata Simplifies Critical Data in IoT Environment 

    Teradata Press Mentions

     
  • admin 9:52 am on October 12, 2015 Permalink
    Tags: DevOpsFriendly, Environment

    Create a DevOps-Friendly Environment 

    by Mike Coutts

    DevOps is an emerging concept that provides a bridge between traditional development groups and their operations counterparts so they can work as one cohesive unit within the IT environment. This approach emphasizes collaboration and integration between developers and operations professionals to improve the predictability, efficiency and effectiveness of operational processes, which are typically automated to some degree.

    DevOps expects that developers will explicitly write their code so operations will be able to collect logs, events and alerts. The goal is for the logs, event records and performance analysis to lead to better implementation, whether by changing the fundamental data model or by adjusting the queries associated with a given application.

    The first requirement of DevOps is that developers treat even the first line of code they write as being ready for production deployment with end-to-end performance analytics available for every component and application. The development can be conducted using client-based environments along with, or instead of, cloud-based options in which developers code, build, test and release in a continuous fashion.

    Operations will increasingly be done via a cloud-like model with applications being deployed, operated and monitored from anywhere the team requires. The operations group needs to be able to identify issues as they arise, feed this information back to the developers and then work with them to resolve problems.

    Introducing UDASQL

    Organizations that leverage the Teradata® Unified Data Architecture™ have the advantage of already possessing a fundamental component for DevOps—one or more large repositories to store and analyze their operational data. Previously, to leverage these data storage resources, developers were required to embed logging such as query banding directly into their code. However, this meant that the logging sometimes did not get done, or was done in an inconsistent manner, which made operational monitoring of the logs more difficult.

    To make it easier for developers in the Unified Data Architecture environment to focus on their primary goal of creating and executing good SQL without having to worry about logging, Teradata created a simple SQL execution engine called udaSQL that’s based on the Apache Ant™ SQL Task.

    Ant was chosen as the basis for the engine for two reasons:

    • The Java-based approach makes it operating-system agnostic, which allows deployment across different platforms.
    • Ant takes a different approach from other scripting languages. It inherently stops on a failure, rather than requiring the developer to add code to check for failures or other errors. This makes the developers' task simpler, since they only need to actively acknowledge the points where they expect an error code. For example, attempting to drop a table that does not exist is an acceptable error that can be ignored through the use of the Ant directive onError="continue".
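
    The stop-on-failure behavior can be sketched with the standard Ant <sql> task (which spells the attribute onerror in lowercase). The driver, URL, credentials and table names below are placeholders for illustration, not details taken from the article:

    ```xml
    <target name="rebuild">
      <!-- Dropping a table that may not exist is an expected, ignorable error -->
      <sql driver="com.teradata.jdbc.TeraDriver"
           url="jdbc:teradata://dbserver"
           userid="dev" password="dev"
           onerror="continue">
        DROP TABLE sandbox.daily_sales;
      </sql>
      <!-- No onerror attribute: any failure here stops the build immediately,
           with no manual error-checking code required -->
      <sql driver="com.teradata.jdbc.TeraDriver"
           url="jdbc:teradata://dbserver"
           userid="dev" password="dev">
        CREATE TABLE sandbox.daily_sales (sale_dt DATE, amount DECIMAL(12,2));
      </sql>
    </target>
    ```

    The default (stop on error) covers the common case; the developer opts out only where an error is genuinely acceptable.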

    udaSQL embeds logging and Query Bands inside the environment so that both development and operations get a consistent set of information. In essence, the tool enforces consistency.

    Simple Life for Developers

    The Ant SQL Task can certainly be used as is. However, udaSQL enables a simplified environment for developers that satisfies the needs of DevOps by embedding logging and query banding into the engine rather than making the developer think about how to do it.

    A comprehensive range of version control checking, local log management and query banding is included within the uda.core.xml file and some associated Java code, which is held within udasql.jar. Developers do not need to deal with any of them. Instead, they rely on operations to monitor any related Unified Data Architecture applications.

    These applications describe a batch process in which a series of SQL statements can be executed in a prescribed sequence within a single Ant target. This can be done as a series of udaSQL tasks with single or multiple SQL statements. Or, through the use of the src="filename" attribute, a file containing a series of SQL statements can be imported and executed in sequence. These files can be existing, simple Basic Teradata Query (BTEQ) scripts, which allows for the reuse of existing SQL logic within this DevOps-friendly environment.
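
    A hypothetical target illustrating both styles, assuming udaSQL is registered as an Ant task named <udasql> with an Ant-SQL-style src attribute; the task name, file path and table names are illustrative assumptions, not from the article:

    ```xml
    <target name="nightly_load">
      <!-- Inline statements, executed in the order written -->
      <udasql>
        DELETE FROM staging.orders_work;
        INSERT INTO staging.orders_work SELECT * FROM landing.orders;
      </udasql>
      <!-- Reuse an existing BTEQ-style file of SQL statements -->
      <udasql src="sql/transform_orders.sql"/>
    </target>
    ```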

    Use Volatile Tables

    One interesting aspect of the DevOps approach, compared to traditional scripting of sequences of BTEQ commands, is that a single session is maintained throughout the entire run of a given target. This allows for the use of volatile tables, which would not be possible across individual BTEQ scripts due to the logon/logoff behavior associated with each script.
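
    The single-session advantage can be sketched as follows; again the <udasql> task name and all object names are assumptions for illustration:

    ```xml
    <target name="report">
      <udasql>
        CREATE VOLATILE TABLE top_customers AS (
          SELECT customer_id, SUM(amount) AS total
          FROM sales.orders
          GROUP BY customer_id
        ) WITH DATA ON COMMIT PRESERVE ROWS;
      </udasql>
      <!-- Same session throughout the target, so the volatile
           table created above is still visible here -->
      <udasql>
        SELECT * FROM top_customers ORDER BY total DESC;
      </udasql>
    </target>
    ```

    Run as separate BTEQ scripts, the second step would fail, because the volatile table disappears when the first script logs off.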

    The same environment that supports the development of these batch applications can also support operations. While udaSQL provides a consistent set of local log files for every run of a given application, which can be used for local debugging, the biggest advantage comes from the automatic creation and application of Query Bands. The following excerpt from a log file shows the Query Band being set:

    run:

    [UdaSQL] SET QUERY_BAND = 'udaAppName=HelloWorld;
    udaAppInst=20140401122257615-10;
    udaAppTarget=run;
    udaAppLogFile=/Users/xx123123/Projects/DeveloperPlatform/UdaSQL/test/log/exec.xml.20140401122257615-10.log;
    udaPlatformVersion=15.00.00.00;
    udaAppVersion=unknown;
    udaAppProduction=false;
    udaAppGitDirty=true;' FOR SESSION;

    Every query run by this application will now be tagged in DBQL with this Query Band. Because of this, the operations team can identify not only problem queries through Teradata Viewpoint portlets such as Query Monitor, but also all associated queries by searching for the unique udaAppInst value within DBQL, as shown here:

    SELECT t1.queryid,
           t1.queryband (FORMAT 'X(20)')
    FROM dbc.dbqlogtbl t1
    WHERE GetQueryBandValue(t1.queryband, 0, 'udaAppInst') = '20140422094527325-55'
    AND t1.queryband IS NOT NULL;

    Support for Any Application System

    As operations activities become more pervasive throughout the Unified Data Architecture, capabilities will become further embedded in Teradata Viewpoint as portlets that can be used to find problem queries and design decisions. More importantly, the portlets will find the original application—and its developer—that provided these queries/designs and allow operations to feed that information back to the developers.

    Ultimately, the DevOps approach will support any size organization and application environment. It will better enable developers to interact with decision support systems while bringing developers together with the operations department to function as a single entity from an IT perspective.

    Mike Coutts is the chief technology officer for the Teradata Enabling Solutions (TES) organization. He is responsible for all aspects of TES architecture and technology. 

    Visit TeradataMagazine.com for this article and others covering Teradata’s latest developments.

     

     

    The post Create a DevOps-Friendly Environment appeared first on Magazine Blog.

    Teradata Blogs Feed

     
  • admin 9:51 am on July 8, 2015 Permalink
    Tags: Environment

    A Data Warehouse and Hadoop Partnership Enhances the Data Environment 

    True or false:

    • Apache™ Hadoop® and the data warehouse should be separate silos.
    • Hadoop should be a layer that feeds the data warehouse.
    • Hadoop can replace the data warehouse.

    If you answered false for all three statements, you are correct! In fact, Hadoop and the data warehouse should not compete against each other; they should be thought of as complementary solutions that work in tandem.

    It’s important to understand the range of abilities and benefits of both solutions. For example, Hadoop supports new workloads and new data types that are not currently available in most databases, while BI workloads are a better fit for data warehousing because they are best served by relational databases. Once the business determines the benefits each solution can provide, it can determine how the technologies can work together to enhance the existing data environment.

    Read this article about the ins and outs of Hadoop and the critical role of the data warehouse in the Q3 2014 issue of Teradata Magazine.

    Carly Schramm
    Assistant Editor
    Teradata Magazine

     

    The post A Data Warehouse and Hadoop Partnership Enhances the Data Environment appeared first on Magazine Blog.

    Teradata Blogs Feed

     
  • admin 9:46 am on May 30, 2015 Permalink
    Tags: Environment

    From Keeping the Lights on to Driving More Value from Your Analytical Environment 


    Teradata Web Casts

     
  • admin 10:34 am on May 12, 2015 Permalink
    Tags: Environment

    Webinar: Managed Services: From Keeping the Lights on to Driving More Value from Your Analytical Environment 

    Gain expert insight from Louise O’Neill, Partners for Teradata Managed Services Center of Expertise (CoE), and Judy Dobson, Teradata Managed Services Delivery Partner, as they discuss the many ways Teradata Managed Services can give you a competitive edge.
    Teradata Events

     
  • admin 9:57 am on April 10, 2015 Permalink
    Tags: Environment

    Multi-State Provider OSF HealthCare Builds Teradata Environment for Advanced Analytics 

    With Teradata Professional Services, OSF HealthCare will soon introduce big data-driven insight across marketing, finance and clinical care.
    Teradata News Releases

     
  • admin 9:47 am on October 27, 2014 Permalink
    Tags: Environment, Mature

    The Stages of Building a Mature IMM Environment 


    Teradata White Papers

     