Updates from June, 2015

  • admin 9:54 am on June 30, 2015 Permalink
    Tags: Bacon, Kevin

    What Do Kevin Bacon and Connection Analytics Have in Common? 

    Remember the “Six Degrees of Kevin Bacon”—the game that links celebrities to Kevin Bacon in six or fewer steps by finding movie relationships between various actors? While the game was just for fun, the principles behind it can offer real value to businesses.

    Connection analytics visualizes relationships and shows how people influence others in their networks. Understanding the links between real-world networks, whether they’re people, products, machines or processes—and the influence exerted by others—can provide businesses with opportunities to boost customer retention, increase revenues, improve brand reputation and more.

    Read more about the advantages of connection analytics in the Q2 2015 issue of Teradata Magazine. 

    Carly Schramm
    Assistant Editor
    Teradata Magazine



    The post What Do Kevin Bacon and Connection Analytics Have in Common? appeared first on Magazine Blog.

    Teradata Blogs Feed

  • admin 9:55 am on June 29, 2015 Permalink
    Tags: PoweringUp

    Powering-Up Utility Insights with Analytics 

    From reducing operating costs to meeting regulatory requirements, today’s utilities face a new wave of business challenges.

    One major business shift is the increasing amount of data collection, as well as the investment in applications and tools to leverage that data. However, many utilities companies are behind the curve in their ability to integrate that data in meaningful and measurable ways.

    So, where does Teradata see the greatest opportunity to drive change with data? Here are four major areas in which utilities companies can use analytic insight to make big gains:

    Operations and Financial Management

    Uncovering operational inefficiencies is just the first step toward a clearer view of your company’s finances. By integrating data from across operations, you can invest funds across the business more effectively. It can also help you set service investment priorities, yet another cost benefit.

    Energy Efficiency and Demand Response

    Increased customer participation in energy efficiency (EE) and demand response (DR) programs depends on improvements in customer segmentation, program performance and measurement. Such improvements lead to reduced revenue losses, enhanced insight for regulatory bodies, and even improved load management and power quality.

    Load Management and Power Quality

    By using the increased transparency and insights enabled by data and analytics, utilities can gain remarkable control over load management and power quality. That control can reduce generation costs, head off potentially large capital outlays and increase revenue.

    Regulatory and Rate Design

    To maintain healthy margins, utilities require readily accessible and accurate data. This information is crucial not only for the company itself but also for regulatory commissions as a means to justify rate design. With insight into variances by customer and rate type, utilities can make top-line adjustments as appropriate.

    An experienced leader in data-driven technology is an important partner in applying these ideas and developing an appropriate plan to address your company’s challenges.

    Read our new Utilities Point-of-View on how to make your data pay dividends.

    When you know more, you can do more. Teradata.com/Utilities

    The post Powering-Up Utility Insights with Analytics appeared first on Industry Experts.


  • admin 9:53 am on June 28, 2015 Permalink

    Big Data for All 

    by Sri Raghavan

    Organizations understand the value of big data analytics and discovery, which is why they continue to invest in data scientists and business analysts who can generate incredible insights. Unfortunately, barriers still exist that keep analytics and discovery models from being consumable by the masses.

    Business users desperately want to utilize discovery and big data insights, but to do so, they require an easy-to-use, easy-to-access, interactive visual interface to leverage the information. Typically, big data platforms that have any market presence today are not accessible to the vast majority of business stakeholders because there is no easy way to deploy advanced analytics models.

    Another obstacle is that the platforms do not make the models easily repeatable, which would allow users to focus on operationalizing the insights rather than figuring out how to conduct the analytics. Organizations are therefore forced to invest in hard-to-obtain and expensive resources to consistently staff their analytics initiatives. Over time, this strategy becomes difficult to sustain.

    A solution is needed to overcome these traditional challenges. It needs to extend and enable the value of discovery analytics to a wider business user and BI community that goes beyond SQL-savvy analysts and data scientists.

    Deliver Value to a Larger Community

    With the adoption of big data apps, a user base can grow and make analytics easier for more people. SQL users of discovery platforms can employ apps to capture innovative analytic logic, then deploy and share the information with a broad group across the enterprise. Once big data analytics and insights are in business users’ sights, the number of users and workloads will increase—and so could the ROI.

    Two distinct groups will significantly benefit:

    • When the platform provides a graphical user interface (GUI) and standards-based app building and configuration, IT personnel can leverage them to allow users such as data scientists, developers and analysts to seamlessly and quickly build, configure, deploy and share big data apps.
    • The business rank and file, from the C-suite to the line of business managers and analysts, will need to utilize an interactive, Web-based user experience. This will allow business people to analyze, view and share results from big data apps and focus quickly on operationalizing the insights to encourage innovation across the entire organization.

    The solution also needs to be compatible with BI tools such as Tableau, MicroStrategy and others. The underlying discovery platform should provide a REST API that allows BI and visualization tools to call the apps, enabling easy integration with these tools and other open-source visualization packages to extend the capabilities of the solution. In addition, the solution should include visualizations such as Sankey, Chord and Sigma diagrams to display the results of the pre-built analytics functions. However, these are not intended to replace the visualization capabilities of the BI tools.
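As a rough illustration, a BI or visualization tool might invoke such a big data app over REST along these lines. This is only a sketch: the base URL, endpoint path and parameter names below are hypothetical placeholders, not an actual Teradata API.

```python
import json
from urllib.parse import urljoin

# Hypothetical base URL -- the real paths depend entirely on the
# discovery platform's REST API.
BASE_URL = "https://discovery.example.com/api/"

def build_app_request(app_name, params):
    """Build the URL and JSON body for invoking a big data app by name."""
    url = urljoin(BASE_URL, f"apps/{app_name}/run")
    body = json.dumps({"parameters": params})
    return url, body

# A BI tool would POST this request and render the returned rows,
# for example as a Sankey or Chord diagram.
url, body = build_app_request("customer_churn", {"lookback_days": 90})
# url -> "https://discovery.example.com/api/apps/customer_churn/run"
```

The point of such an interface is that the BI tool never needs to know the analytic SQL behind the app; it only supplies parameters and consumes results.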

    The Power of Big Data Apps

    Big data apps are scalable, reusable, industry-focused applications that complement discovery platforms to address specific business questions for organizations across all industries. The apps target particular business challenges such as fraud, customer churn and loyalty. They can also be used for process optimization, purchase paths and cart abandonment, patient treatment paths, influencer behavior, call center optimization, review analysis and other important use cases.

    The exploitation of big data apps will significantly enhance any organization’s ability to improve its culture of data-driven decision making. This happens by enabling all stakeholders to share in the deployment, consumption and operationalization of analytics. The result is a substantial reduction in the IT department’s burden of maintaining and managing complex solutions. At the same time, the apps are not hindered by traditional technology barriers, so they can increase the ability of the business to proactively address crucial challenges on its way to greater innovation, reduced overall expenditure and higher profitability.  

    Read the full article and more in the Q2 2015 issue of Teradata Magazine.

    Sri Raghavan is a senior product marketing manager at Teradata. He has more than 18 years of experience in advanced analytics.





    The post Big Data for All appeared first on Magazine Blog.


  • admin 9:51 am on June 27, 2015 Permalink
    Tags: BestInClass

    CSS Insurance: Using Big Data & Marketing Analytics to be Best-In-Class in Customer Satisfaction 

    “If you don’t have any data, you don’t know, it’s like you are driving a car and you are blind, and that’s the same as the data.  It’s the basis. You can manage by emotions, but not by facts. I like to combine emotions with facts because it gives you much more power.” – Volker Schmidt, CIO & CMO



    More power with data; that’s how CSS Insurance is fulfilling its mission to be best-in-class in customer service by 2018. Founded in 1899, this Switzerland-based insurance group serves 1.77 million people and is the country’s leading health, accident and property insurer. When the Teradata customer engagement team sat down with CIO & CMO Volker Schmidt, we understood very quickly why CSS is leading the way: the company is innovating with data and solutions every day!

    Because insurance is a commodity product, CSS knows it has to differentiate on service or price, and CSS is choosing service. Using sophisticated analytics with Teradata Aster™, Teradata Marketing Applications and its integrated data warehouse, CSS Insurance built the “Process House” for every single customer interaction. That’s billing, the call center, claims – everything! Mapping every process a client goes through has given the company transparency into all customer interactions. When it comes to customer satisfaction, the “Process House” drives everything. From the client’s point of view, this is customer experience management; from CSS’s point of view, it is process and quality management. When CSS improves each step and interaction, it ultimately improves customer satisfaction and, with it, the brand.

    “If a customer is not really satisfied with the service, this information gets back to the data warehouse and we are sending out another lead to a professional client rep to contact the unsatisfied customer to solve his problem immediately.  With a process like that, we reduce the churn of the unsatisfied customer dramatically.” – Volker Schmidt, CIO & CMO

    They’re also using inbound calls to serve their customers – by offering them the next best product, an offer that is individualized to the customer by Teradata Customer Interaction Manager (CIM).

    “If you’re doing a campaign based on event-driven marketing, it’s often an outbound campaign.  We are receiving more than 2M telephone calls in our call center a year, and we are using these contacts. The service reps get the campaign represented on the CRM system; he is asking the client if he needs more insurance, because of the next best product analytics we’re doing.” – Volker Schmidt, CIO & CMO

    Using web analytics and customer data, CSS Insurance is able to understand their customers even better to increase customer satisfaction.

    “We have been running our internet portal for about ten months – you get personalized information about the client, how he is interacting, and we are also gathering this kind of data and performing analytics.  If you are just analyzing your website with anonymous clients, it’s not so interesting.  Now, if you know who your client is and what he’s doing on the internet portal and what is he looking for, you have much more information. You can connect with the client and use it for other campaigns; we redesign webpages, we improve the customer process flows.” – Volker Schmidt, CIO & CMO

    Without dictating specific treatments to customers, CSS Insurance is able to send customers information and recommendations on diagnoses and treatments for the conditions they are searching for.

    On the horizon for CSS? Groundbreaking stuff. To reach its best-in-class vision for 2018, CSS will use Teradata Aster™ to translate speech to text from customer calls and then perform sentiment analysis to produce a fully automated customer satisfaction score for each one (remember, CSS averages 2M incoming calls per year).

    “An unsatisfied customer tells all his friends about his experience with CSS, and that’s not good.  If he is satisfied he also tells everyone about the experience of CSS. We’re not asking any questions, ‘Hey, how satisfied are you?’  We just figure it out and call him.  We would like to bring this data from the telephone system into Aster, have an algorithm to analyze the sentiment of the speech, and send out a lead via the CRM or with the CIM system to our front, and then somebody can solve the problem of our customers.” – Volker Schmidt, CIO & CMO, CSS Insurance

    Congratulations and thank you to the entire CSS Insurance team for sharing your story of success!


    The post CSS Insurance: Using Big Data & Marketing Analytics to be Best-In-Class in Customer Satisfaction appeared first on Insights and Outcomes.


  • admin 9:46 am on June 27, 2015 Permalink

    Teradata Database 15.10 

    Teradata Brochures

  • admin 9:55 am on June 26, 2015 Permalink

    CSS Insurance Using Big Data and Marketing Analytics 

    Teradata Videos

  • admin 9:51 am on June 26, 2015 Permalink

    Why We Love Presto 

    Concurrent with acquiring Hadoop companies Hadapt and Revelytix last year, Teradata opened the Teradata Center for Hadoop in Boston. Teradata recently announced that a major new initiative of this Hadoop development center will include open-source contributions to a distributed SQL query engine called Presto. Presto was originally developed at Facebook, and is designed to run high performance, interactive queries against Big Data wherever it may live — Hadoop, Cassandra, or traditional relational database systems.

    Those who will be part of this initiative and contribute code to Presto include a subset of the Hadapt team that joined Teradata last year. In the following, we dive deeper into the thinking behind this new initiative from the perspective of the Hadapt team. It is important to note upfront that Teradata’s interest in Presto, and the people contributing to the Presto codebase, extend beyond the Hadapt team that joined Teradata last year. Nonetheless, it is worthwhile to understand the technical reasoning behind Teradata’s embrace of Presto, even if this presents a localized view of the overall initiative.

    Around seven years ago, Ashish Thusoo and his team at Facebook built the first SQL layer over Hadoop as part of a project called Hive. At its essence, Hive was a query translation layer over Hadoop: it received queries in a SQL-like language called HiveQL and transformed them into a set of MapReduce jobs over data stored in HDFS on a Hadoop cluster. Hive was truly the first project of its kind. However, because it focused on translating queries into Hadoop’s existing MapReduce execution engine, it achieved tremendous scalability but poor efficiency and performance, a shortcoming that ultimately led to a series of subsequent SQL-on-Hadoop solutions claiming 100X speed-ups over Hive.

    Hadapt was the first such SQL-on-Hadoop solution that claimed a 100X speed-up over Hive on certain types of queries. Hadapt was spun out of the HadoopDB research project from my team at Yale and was founded by a group of Yale graduates. The basic idea was to develop a hybrid system that is able to achieve the fault-tolerant scalability of the Hive MapReduce query execution engine while leveraging techniques from the parallel database system community to achieve high performance query processing.

    The intention of HadoopDB/Hadapt was never to build its own query execution layer. The first version of Hadapt used a combination of PostgreSQL and MapReduce for distributed query execution. In particular, the query operators that could be run locally, without reliance on data located on other nodes in the cluster, were run using PostgreSQL’s query operator set (although Hadapt was written such that PostgreSQL could be replaced by any performant single-node database system). Meanwhile, query operators that required data exchange between multiple nodes in the cluster were run using Hadoop’s MapReduce engine.

    Although Hadapt was 100X faster than Hive for long, complicated queries that involved hundreds of nodes, its reliance on Hadoop MapReduce for parts of query execution precluded sub-second response time for small, simple queries. Therefore, in 2012, Hadapt started to build a secondary query execution engine called “IQ” which was intended to be used for smaller queries. The idea was that all queries would be fed through a query-analyzer layer before execution. If the query was predicted to be long and complex, it would be fed through Hadapt’s original fault-tolerant MapReduce-based engine. However, if the query would complete in a few seconds or less, it would be fed to the IQ execution engine.
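The query-analyzer idea described above can be sketched in a few lines. This is an illustrative toy, not Hadapt code; the runtime threshold and engine names are assumptions chosen for the example.

```python
# Illustrative sketch of Hadapt's query-analyzer routing: send a query
# to the interactive "IQ" engine when its predicted runtime is short,
# and to the fault-tolerant MapReduce-based engine otherwise.
# The 5-second threshold is an assumed value for illustration.
INTERACTIVE_THRESHOLD_SECONDS = 5.0

def route_query(predicted_runtime_seconds: float) -> str:
    """Pick an execution engine based on the predicted query runtime."""
    if predicted_runtime_seconds <= INTERACTIVE_THRESHOLD_SECONDS:
        return "IQ"          # interactive execution engine
    return "MapReduce"       # fault-tolerant batch engine

short_engine = route_query(1.5)    # a quick lookup query
long_engine = route_query(600.0)   # a long, complex analytical query
```

The design trade-off is the one the post describes: fault tolerance and scalability for long-running queries, low latency for short ones, at the cost of maintaining two execution paths behind one SQL front end.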

    In 2013, Hadapt integrated IQ with Apache Tez in order to avoid redundant engineering effort, since the primary goals of IQ and Tez were aligned. In particular, Tez was designed as an alternative to MapReduce that can achieve interactive performance for general data processing applications. Indeed, Hadapt was able to achieve interactive performance on a much wider range of queries when leveraging Tez than it could previously.

    Figure 1: Intertwined Histories of SQL-on-Hadoop Technology

    Unfortunately, Tez was not quite a perfect fit as a query execution engine for Hadapt’s needs. The largest issue was that before shipping data over the network during distributed operators, Tez first writes the data to local disk. The overhead of writing this data to disk (especially when the intermediate result set was large) precluded interactivity for a non-trivial subset of Hadapt’s query workload. A second problem was that the Hive query operators implemented over Tez use (by default) traditional Volcano-style row-by-row iteration. In other words, a single function invocation for a query operator processes just a single database record. This results in a large number of function calls to process a large dataset, and poor instruction cache locality, as the instructions associated with a particular operator are repeatedly reloaded into the instruction cache for each invocation. Although Hive and Tez have started to alleviate this issue with the recent introduction of vectorized operators, Hadapt still found that query plans involving joins or SQL functions would fall back to row-by-row iteration.
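The row-at-a-time versus vectorized distinction can be made concrete with a toy example. This is a deliberately simplified sketch in Python (the real engines are Java): the same filter is applied once per record versus once per batch, and we count operator invocations to show where the overhead comes from.

```python
# Toy contrast between Volcano-style iteration (one operator call per
# record) and vectorized execution (one call per batch). Fewer calls
# means less per-call overhead and better instruction cache locality.
calls = {"row": 0, "vectorized": 0}

def filter_row(record: int, threshold: int) -> bool:
    """Row-at-a-time operator: invoked once for every single record."""
    calls["row"] += 1
    return record > threshold

def filter_batch(batch: list, threshold: int) -> list:
    """Vectorized operator: invoked once for the whole batch."""
    calls["vectorized"] += 1
    return [r for r in batch if r > threshold]

data = list(range(1000))
row_result = [r for r in data if filter_row(r, 500)]   # 1000 invocations
vec_result = filter_batch(data, 500)                   # 1 invocation
```

Both paths produce identical results; the difference is purely in how many times the operator's code is entered, which is exactly the overhead vectorization removes.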

    The Hadapt team therefore decided to refocus its query execution strategy (for the interactive query part of Hadapt’s engine) to Presto, which presented several advantages over Tez. First, Presto pipelines data between distributed query operators directly, without writing to local disk, significantly improving performance for network-intensive queries. Second, Presto query operators are vectorized by default, thereby improving CPU efficiency and instruction cache locality. Third, Presto dynamically compiles selective query operators to byte code, which lets the JVM optimize and generate native machine code. Fourth, it uses direct memory management, thereby avoiding Java object allocations, its heap memory overhead and garbage collection pauses. Overall, Presto is a very advanced piece of software, and very much in line with Hadapt’s goal of leveraging as many techniques from modern parallel database system architecture as possible.
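Presto's first advantage, pipelining between operators without materializing to disk, can be sketched in miniature with Python generators. This is an analogy for illustration only, not how Presto is implemented: each operator pulls rows from its child and yields results immediately, so no intermediate result set is ever fully built.

```python
# Sketch of pipelined execution: each operator is a generator that
# streams rows from its child and yields output immediately, in
# contrast to Tez's exchange step, which wrote intermediate data to
# local disk before shipping it over the network.

def scan(rows):
    """Leaf operator: produce rows from a source."""
    for row in rows:
        yield row

def filter_op(child, predicate):
    """Pass through only rows satisfying the predicate."""
    for row in child:
        if predicate(row):
            yield row

def project(child, func):
    """Transform each row as it flows past."""
    for row in child:
        yield func(row)

# Rows flow one at a time through the whole pipeline; nothing is
# materialized until the final consumer asks for the results.
pipeline = project(filter_op(scan(range(10)), lambda r: r % 2 == 0),
                   lambda r: r * 10)
result = list(pipeline)  # [0, 20, 40, 60, 80]
```

Presto's remaining advantages (vectorized operators, bytecode generation, direct memory management) are engine-internal and have no simple analogue here, but the streaming shape of the pipeline is the key architectural difference the post highlights.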

    The Teradata Center for Hadoop has thus fully embraced Presto as the core of its technology strategy for the execution of interactive queries over Hadoop. Consequently, it made logical sense for Teradata to take its involvement in Presto to the next level. Furthermore, Hadoop is fundamentally an open source project, and in order to become a significant player in the Hadoop ecosystem, Teradata needs to contribute meaningful and important code to the open source community. Teradata’s recent acquisition of Think Big serves as further motivation for such contributions.

    Therefore, Teradata has announced that it is committed to making open source contributions to Presto, and has allocated substantial resources to doing so. Presto is already used by Silicon Valley stalwarts Facebook, Airbnb, Netflix, Dropbox and Groupon. However, Presto’s enterprise adoption outside of Silicon Valley remains small. Part of the reason is that the ease-of-use and enterprise features typically associated with modern commercial database systems are not fully available with Presto: missing are an out-of-the-box, simple-to-use installer, database monitoring and administration tools, and third-party integrations. Therefore, Teradata’s initial contributions will focus on these areas, with the goal of bridging the gap to getting Presto widely deployed in traditional enterprise applications. This will hopefully lead to more contributors and momentum for Presto.

    For now, Teradata’s new commitments to open source contributions in the Hadoop ecosystem are focused on Presto. Teradata is only committing to contribute a small amount of Hadapt code to open source — in particular those parts that will further the immediate goal of transforming Presto into an enterprise-ready, easy-to-deploy piece of software. However, Teradata plans to monitor Presto’s progress and the impact of Teradata contributions. Teradata may ultimately decide to contribute more parts of Hadapt to the Hadoop open source community. At this point it is too early to speculate how this will play out.

    Nonetheless, Teradata’s commitment to Presto and its commitment to making meaningful contributions to an open source project is an exciting development. It will likely have a significant impact on enterprise-adoption of Presto. Hopefully, Presto will become a widely used open source parallel query execution engine — not just within the Hadoop community, but due to the generality of its design and its storage layer agnosticism, for relational data stored anywhere.


    Daniel Abadi is an Associate Professor at Yale University, founder of Hadapt, and a Teradata employee following the recent acquisition. He does research primarily in database system architecture and implementation. He received a Ph.D. from MIT and an M.Phil. from Cambridge. He is best known for his research in column-store database systems (the C-Store project, commercialized by Vertica), high-performance transactional systems (the H-Store project, commercialized by VoltDB), and Hadapt (acquired by Teradata). http://twitter.com/#!/daniel_abadi

    The post Why We Love Presto appeared first on Data Points.


  • admin 9:48 am on June 26, 2015 Permalink

    The Catch-22 in Cyber Defense: More Isn’t Always Better 

    Teradata Articles

  • admin 9:48 am on June 26, 2015 Permalink
    Tags: Closer

    Individualized Insights Bring Retailers Closer to Their Customers 

    Teradata White Papers

  • admin 9:48 am on June 26, 2015 Permalink

    DMC Product Overview FR 

    Teradata Brochures
