Updates from March, 2015

  • admin 9:51 am on March 26, 2015 Permalink
Tags: Pipeline

    Teradata University Network Builds Big Data Talent Pipeline 

    Registration deadline April 1 for global business analytics competition among universities
    Teradata News Releases

     
  • admin 9:51 am on March 26, 2015 Permalink

    The Four Essential Truths Of Real-Time Customer Engagement: The Fourth Truth 


At this point in our story, we have a connected consumer “damsel in distress” (or “damsel that wants this dress”) interacting with a real-time engagement channel. You know who she is and what she means to you. You know what journey she’s on and what strategic marketing campaigns she’s part of. In that moment of clarity, you have performed real-time rules-based analytics, self-learning predictive analytics and strategic arbitration of your enterprise offers, and you’ve decided specifically which offers to extend and where.

    What could possibly go wrong?

    Well, as much as I hate to stop all that positive momentum, it’s now time for the fourth and final essential truth of real-time customer engagement:

    If you can’t get your message delivered at the right time, it doesn’t really matter how good it is.

That’s right. If you can’t get your offers delivered to the consumer in real time, all that work has been for naught.

This is where the conversation gets channel-specific (and yet more individualized), and as true omni-channel marketers, we should cover all the interactive channels. For the sake of argument, let’s consider: customer care, web (paid, earned, owned), kiosk/ATM/POS/self-service, email/SMS, social and mobile. If anyone thinks I’m missing one, tweet me (@TDjtimmerman) and I’ll publicly issue a mea culpa.

Customer care: Generally regarded as the best-prepared channel for getting a message/offer delivered in real time. As long as your customer care application supports the real-time selection of customer-specific offers, the agent handling the call can easily read the offer to the customer in real time.

Web: The channel with the greatest immediate potential, due to the sheer volume of transactional traffic online. Of course, the ideal scenario is a real-time customer interaction on an “owned” website where the customer is easily and uniquely identified and the dynamic content of the website is easily managed. Paid and earned websites are somewhat trickier because you don’t dictate the amount of customer identification data that you’re provided in real time, and you often have to use technology partners to get your content/offers/messages (however limited they may be) delivered in real time. Even so, the goal remains the same: to return an offer that ultimately drives interaction on an owned website.

    Kiosk/ATM/POS/Self-Service: Essentially, these are instantiations of websites. For the most part, all the applications that drive these devices are web-based client applications, and as such, each may be treated as an owned website. There may be a relatively anonymous “outer layer” to the application, but customer sign-on/identification/authentication must be enabled to realize true one-to-one offer individualization. Therefore, these are the perfect channels to include in loyalty program-based interactional offer decisioning and delivery. With the geo-fencing and beacon technologies available today, you can easily integrate these location-based devices/channels into the real-time customer experience.
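To make the location piece concrete, here is a minimal sketch of a geo-fence check, assuming a hypothetical location ping and a hypothetical offer-decisioning hook; the coordinates, radius and function names are all illustrative, not a description of any particular beacon product.

```python
# A hedged sketch of geo-fencing: decide whether a device sits inside a
# store's radius and, if so, trigger the real-time offer flow.
# Coordinates, radius and the offer hook are illustrative assumptions.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

STORE = (33.4484, -112.0740)   # hypothetical store lat/lon
FENCE_RADIUS_M = 150

def on_location_ping(customer_id, lat, lon):
    # Called whenever the customer's app reports a location
    if haversine_m(lat, lon, *STORE) <= FENCE_RADIUS_M:
        # hypothetical hand-off to the real-time offer-decisioning service
        print(f"Trigger in-store offer for {customer_id}")

on_location_ping("C123", 33.4489, -112.0735)
```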

Email/SMS: If the content is relevant, an email/SMS can certainly be incorporated into the real-time customer engagement mix. However, while the initial email may have been marketing-driven or transactional in nature, the specific content of the message should be recalculated at the moment the email is opened, not frozen at the time the message was created, just in case something has changed with the customer between message generation and the moment the email/SMS is opened.
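One common way to achieve open-time recalculation is to have the email body fetch its offer content from a service when the message is rendered. Here is a minimal sketch, assuming a hypothetical decision engine; the endpoint, template and function names are invented for illustration.

```python
# A minimal sketch of open-time offer recalculation. The offer is computed
# when the email client requests the content, not when the email was sent.
# decide_offer() is a placeholder for a real decision engine (assumption).
from flask import Flask, render_template_string

app = Flask(__name__)

OFFER_TEMPLATE = "<h1>{{ headline }}</h1><p>{{ body }}</p>"

def decide_offer(customer_id: str) -> dict:
    """Placeholder: re-score the customer against current rules and
    models at request time (rules, predictive scoring, arbitration)."""
    return {"headline": "Today's offer", "body": f"Tailored for {customer_id}"}

@app.route("/email-content/<customer_id>")
def email_content(customer_id):
    # Invoked when the email renders, so the offer reflects the
    # customer's state at open time rather than at send time.
    return render_template_string(OFFER_TEMPLATE, **decide_offer(customer_id))

if __name__ == "__main__":
    app.run()
```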

    Social: If you want to engage with consumers through Facebook Custom Audiences and/or have your own social sites with social teams to monitor them, you can certainly implement real-time decisioning and customer engagement tactics across these channels. Much like with paid or earned web interactions, the customer identification options can be more limited. However, that’s nothing that can’t be overcome with the right technology partners and marketing applications that can help you better understand your customers and better determine what journeys they’re on so you can make a real-time offer decision.

    Mobile: Likely the channel with the most future potential. Mobile is here to stay, and it’s only going to get more prolific, with more payment/transactional options. Mobile is, and will continue to be, more than just phones, pads and tablets. Any iOS or Android device needs apps, and when your app is on your customers’ mobile devices, you’ve opened up a brave new world for you… and for them. Apps allow you to push to mobile, which makes them even more interactive and real-time than waiting for someone to access your website from their mobile device. Are all business relationships conducive to a mobile app? Maybe not, but if yours is, you’ve got to be excited about mobile apps for real-time customer engagement.

    Each interactive channel brings its own challenges and each brings its own opportunity. But, if you can’t get your message delivered at the right time, it doesn’t really matter how good it is.

    This is the final post of a 5-part series.  Links to all related posts in the series are presented below. 

    Read the first post in this series: The Four Essential Truths Of Real-Time Customer Engagement

    Read Truth #1: The Four Essential Truths Of Real-Time Customer Engagement: The First Truth

    Read Truth #2: The Four Essential Truths Of Real-Time Customer Engagement: The Second Truth

    Read Truth #3: The Four Essential Truths Of Real-Time Customer Engagement: The Third Truth

    Read Truth #4: The Four Essential Truths Of Real-Time Customer Engagement: The Fourth Truth

    The post The Four Essential Truths Of Real-Time Customer Engagement: The Fourth Truth appeared first on Teradata Applications.

    Teradata Blogs Feed

     
  • admin 9:48 am on March 26, 2015 Permalink
Tags: Holdings

    Peer Advantage Presents Enterprise Holdings Best Practices Reporting and BI Solutions 


    Teradata Web Casts

     
  • admin 9:52 am on March 25, 2015 Permalink
Tags: Domain

    The Role of Domain Experts in Data Science 

During my 30-year analytics career, prospective employers and clients have often asked me: ‘How can you help us with data-driven insights when you have not worked in this industry before?’

    Clearly, the description of data scientist as the mythical unicorn who has computer science skills, statistical knowledge and domain expertise (Figure 1) has had an impact. The proliferation of different analytics disciplines such as social network analysis, digital analytics, bio-informatics and supply chain analytics, lends weight to the argument that domain expertise definitely matters.

[Figure 1: The data scientist as the mythical unicorn at the intersection of computer science skills, statistical knowledge and domain expertise]

There are also anecdotes on the web of data science projects that went pear-shaped because the analysts were not subject matter experts. A deeper look into these anecdotes reveals that the issues are not due to a lack of domain expertise, but to poor data science, such as over-fitting, bad sampling methods and unnecessary data cleansing. Still, the myth that domain expertise trumps all else continues!

Data mining competitions such as those run by Kaggle and KDD have demonstrated the opposite and shown how data science can be successfully outsourced to people without domain expertise. Many companies have run competitions on such diverse topics as optimizing flight routes, predicting ocean health and detecting diabetic retinopathy. Data scientists with little or no expertise in the domain have responded brilliantly with useful solutions. Adam Kowalczyk and I won the KDD Cup on yeast gene regulation prediction with no background in biology. Some data scientists, such as David Vogel and Claudia Perlich, have even won across multiple domains, indicating that data science skills are transferable across domains.

The counter-argument to this success is that in these competitions, the domain experts have already generated the hypothesis by posing the right business question and preparing the data (Figure 2), so the competitors need only model and test. But in the brave new world of massive data, along with the mathematical tools and computing power to crunch those numbers, the old-world paradigm of hypothesizing before modeling is likely to be challenged. With its approach to language learning, Google has shown a whole new way of understanding the world without any a priori models or theories.

[Figure 2: The data science process, from posing the business question and preparing the data through to modeling and testing]

    So, if domain expertise is not necessary for the steps of posing the business question and analytical problem definition, what about data acquisition and data preparation?

In my experience, domain knowledge about data capture and transformation processes at the sensors can be acquired through exploration of the raw data. Often, good data scientists become subject experts just by playing with the data and asking domain experts about the anomalies they find. For instance, using just such a process, my analytics team in a manufacturing company identified a long-standing, but previously undiscovered, anomaly in the summarised sales and inventory feed from a large retailer. This anomaly materially affected the retail inventory reporting and had to be fixed programmatically. Subsequently, my data science team members were the acknowledged retail supply chain experts!
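As a flavour of what that exploration can look like, here is a minimal sketch of anomaly hunting in a summarised sales/inventory feed. The file name, column names and threshold are assumptions invented for illustration, not the retailer's actual feed.

```python
# A hedged sketch: inventory this week should roughly equal last week's
# inventory plus receipts minus sales; large residuals flag a broken feed.
# File and column names are hypothetical.
import pandas as pd

feed = pd.read_csv("retailer_feed.csv", parse_dates=["week"])

g = feed.sort_values("week").groupby("store_sku")
feed["expected_on_hand"] = (
    g["on_hand"].shift(1) + feed["receipts"] - feed["units_sold"]
)
feed["residual"] = feed["on_hand"] - feed["expected_on_hand"]

# Flag residuals more than three standard deviations from the mean
z = (feed["residual"] - feed["residual"].mean()) / feed["residual"].std()
anomalies = feed[z.abs() > 3]  # take these to the domain experts
print(anomalies[["store_sku", "week", "residual"]])
```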

Domain expertise is most relevant, perhaps, in the interpretation of insights, particularly those gained using unsupervised learning about the workings of complex physical processes. One example was the use of the Teradata Aster discovery platform to perform root cause analysis of failures across an aircraft fleet from aircraft sensor and maintenance data. While the analysis started with no a priori model, the a posteriori interpretation of the results from the path analysis, and the subsequent follow-up to improve aircraft safety, certainly required domain expertise.
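For readers unfamiliar with path analysis, here is a toy sketch of the idea in plain Python (not the Aster implementation): count the event sequences that most often precede a failure, then hand the top paths to the domain experts for interpretation. The event log format and event names are invented.

```python
# A hedged, toy illustration of nPath-style analysis: which event
# sequences most often immediately precede a failure?
# The (aircraft_id, timestamp, event) log below is fabricated.
from collections import Counter
from itertools import groupby

events = [
    ("A1", 1, "vibration_high"), ("A1", 2, "oil_temp_high"), ("A1", 3, "failure"),
    ("A2", 1, "oil_temp_high"), ("A2", 2, "vibration_high"), ("A2", 3, "failure"),
    ("A3", 1, "vibration_high"), ("A3", 2, "oil_temp_high"), ("A3", 3, "failure"),
]

paths = Counter()
for aircraft, rows in groupby(sorted(events), key=lambda r: r[0]):
    seq = [event for _, _, event in rows]
    if "failure" in seq:
        i = seq.index("failure")
        # the two events immediately preceding the failure
        paths[tuple(seq[max(0, i - 2):i])] += 1

# The most common pre-failure paths become candidates for root-cause
# review -- the step where domain expertise is indispensable.
print(paths.most_common(3))
```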

Returning to the original question: ‘How can you help us with data-driven insights when you have not worked in this industry before?’, my response is as follows.

1. Machine learning (the intersection of computer science and statistics in Figure 1) brings a fresh perspective that leads to new insights, and the absence of prior domain knowledge can actually be advantageous, especially in overcoming long-standing domain bias.
2. Provided the machine learners bring curiosity, a willingness to learn about the company and its domain, and the humility to ask the domain experts about the subject, they will not only understand the domain; through their questioning, they will also cross-pollinate the subject matter experts, so the team as a whole is stronger.

So, when hiring a data scientist, focus on the machine learning aspect, particularly the desire to play with the data using a number of different techniques and languages. Consider also the analytical skills to question and solve problems iteratively. Partner the data scientists with domain experts so cross-pollination can occur. This, to me, is a better pathway for bringing data science to a business than searching for the elusive unicorn depicted in Figure 1.

Bhavani Raskutti is the Domain Lead for Advanced Analytics at Teradata ANZ. She is responsible for identifying and developing analytics opportunities using Teradata Aster and Teradata’s analytics partner solutions. She is internationally recognised as a data mining thought leader and is regularly invited to present at international conferences on Mining Big Data. She is passionate about transforming businesses to make better decisions using their data capital.

    The post The Role of Domain Experts in Data Science appeared first on International Blog.

    Teradata Blogs Feed

     
  • admin 9:47 am on March 25, 2015 Permalink
Tags: hits

    SiriusXM Hits the Marketing Fast Lane 


    Teradata Web Casts

     
  • admin 9:51 am on March 24, 2015 Permalink
Tags: Unlocked

    The Value of Big Data Unlocked 

It’s on every enterprise list of Things To Tackle in 2015. It’s every organization’s technological priority because it’s commonly considered important to future growth and competitive positioning. The value of big data is big news, without a doubt.

    Even though most business executives think realizing benefits from the value of big data is long overdue, look at the low participation figures:

• According to a recent IDG Enterprise survey, only 14 percent of respondents said that their enterprises had already deployed big data solutions³
• Only 44 percent of enterprises report their organizations are in the planning or implementation stage of big data solutions¹
• A full 85 percent of executives surveyed reported facing significant obstacles in dealing with big data, including security issues, a shortage of trained staff, and the need to develop new internal capabilities.

Arguably, all the hesitation links back to the complexity of the data and of finding the solutions to manage it. If there’s so much upside, why aren’t more companies further along in their efforts to exploit their big data? Most organizations have not yet acquired the technology or expertise required to unravel the complexity, much less leverage data to its full potential.

Today, in an effort to realize real-world benefits (value from big data), semi-structured data such as social profiles and Twitter feeds join unstructured data like images and PDFs to add intelligence to an organization’s structured data from its traditional databases. Further exacerbating the complexity, big data is generated at high velocity and collected at frequent intervals, making the volume of the new data types nearly unmanageable.

    Additionally, businesses need to unlock existing data from silos and gain a holistic view of all the new information so that they can make unique associations and ask important questions about customers and products. They need a technology solution that integrates their data stores, identifies behavior patterns, and draws meaningful associations and inferences. It’s important to understand that the value of big data will go beyond sophisticated reporting. It will advance from historical insight to being highly predictive, enabling managers to make the best decisions possible.

Teradata has created an advanced solution that deals with all these complexities and hurdles. It seamlessly integrates the variety, volume and velocity of the data in an integrated or unified data architecture. It clears the hurdles posed by difficult programming languages, extreme processing needs and customized data storage. Put simply, it provides a high-performance big data analytics system easily appreciated by both IT professionals and real-world business users.

Because Teradata’s Unified Data Architecture™ lets business users ingest and process data, it makes it faster to discover insights and act upon them.

With the majority of organizations just beginning to get their feet wet, there is still sizeable competitive advantage to be gained from unlocking insights from the almost limitless cache of data. Real-world experiences reveal real-world advantages:

• The average Fortune 1000 company can increase annual net income by $65.67 million with an increase of just 10 percent in data accessibility²

• Top retailers have increased operating margins by 60 percent through monitoring customers’ in-store movements and combining that data with transaction records to determine optimal product placement, product mix and pricing³
• The U.S. healthcare industry stands to add $300 billion in revenues by leveraging big data⁴
• Financial institutions are reducing customer churn by using data analytics to evaluate consumer and criminal behavior.

A solution like Teradata’s Unified Data Architecture, where users can ask any question at any time to unlock new and valuable business insights, is a painless catalyst to discovering new competitive advantages and profit. Every organization desires higher productivity, lower costs, and an expanded horizon of new opportunities. It’s a big advantage to be able to open up discovery to users across the enterprise, not only the IT elite.

    Learn more about Teradata’s Unified Data Architecture.

1. http://www.idgenterprise.com/press/big-data-initiatives-high-priority-for-enterprises-but-majority-will-face-implementation-challenges

    2. http://www.forbes.com/sites/ciocentral/2012/07/09/will-big-data-actually-live-up-to-its-promise/2/
    3. http://www.truaxis.com/blog/12764/big-profits-from-big-data/
    4. http://www.information-management.com/news/big-data-ROI-Nucleus-automation-predictive-10022435-1.html

    The post The Value of Big Data Unlocked appeared first on Data Points.

    Teradata Blogs Feed

     
  • admin 9:51 am on March 23, 2015 Permalink
Tags: Priorities

    3 Priorities for Big Data Success 

    by Stefan Biesdorf, David Court and Paul Willmott 

    Leveraging big data and advanced analytics requires a plan. While it may sound obvious, most companies skip over the step of defining how data, analytics, frontline tools and people come together to generate business value. The power of a plan is that it provides a common language allowing senior executives, technology professionals, data scientists and managers to discuss where the greatest returns will come from and, more importantly, to choose where and how to get started. In our experience, critical priorities include:

    1. Match investment priorities with business strategy

    Integrating “stovepipes” of data across, say, transactions, operations and customer interactions can provide powerful insights, but the cost of a new data architecture and developing the many possible models and tools can be immense—and that calls for choices. There’s no substitute for serious engagement by the senior team in establishing priorities.

    2. Balance speed, cost, and acceptance

    Once investment priorities are established, it’s not hard to find software and analytics vendors who have developed applications and algorithmic models to address them. These packages can be cost-effective and easier and faster to install than internally built, tailored models. However, they often lack the qualities of an impressive app—one that’s built on real business cases and can energize managers. Planning efforts should balance the need for affordability and speed with the need for a mix of data and modeling approaches that reflect business realities.

    3. Focus on frontline engagement and capabilities

    Engaging the organization starts with the creation of analytic models that frontline managers can understand. The models should be linked to easy-to-use decision support tools and to processes that let managers apply their own experience and judgment to the outputs of models. While a few analytic approaches, such as basic sales forecasting, are automatic and require limited frontline engagement, the lion’s share will fail without strong managerial support, which is why involving managers is critical.

    Leverage the Potential

    The essence of a good strategic plan is that it highlights the critical decisions or trade-offs a company must make and defines the initiatives it must prioritize, such as emphasizing higher margins or faster growth. Organizations should address analogous issues: choosing the internal and external data to integrate, selecting analytic models and tools that will best support business goals, and building the capabilities needed to exploit this potential.

Stefan Biesdorf is a principal in McKinsey & Company’s Munich office, David Court is a director in the Dallas office and Paul Willmott is a director in the London office. This article was excerpted from their March 2013 McKinsey Quarterly article, “Big data: What’s your plan?,” available on mckinsey.com.

    Read this article and more in the Q1 2015 issue of Teradata Magazine.

    The post 3 Priorities for Big Data Success appeared first on Magazine Blog.

    Teradata Blogs Feed

     
  • admin 9:52 am on March 22, 2015 Permalink
Tags: Watchwords

    Top Five Watchwords for Implementing the Data Lake 

It’s been nice to see industry discussion around liquid analytics and the data lake evolve beyond ideals and concepts and into the brass tacks of implementation. A great example here is a story by ITBusinessEdge.com’s Loraine Lawson on “Why Data Lakes Turn Into Data Swamps.” Loraine reiterated the “data swamp” term during an interview with Dan Graham, Teradata’s director of technical marketing, whom she tapped for some valuable insights while writing her piece on how data lake implementations can easily go awry without the right strategies and resources in place.


    One way companies get bogged down is to mistakenly think of the data lake as a particular product or service.  As I’ve written before, the data lake is instead an approach to analytics that can and should involve architectures made up of various technologies and multiple platforms. As more data-driven companies come to realize this, our industry needs to be ready to help them work through the implementation checklist when setting up a liquid architecture of multiple systems, analytic techniques and programming languages.

    So here’s a way to distill things down to five watchwords when implementing the data lake, but don’t think of these as separate buckets. Think of them instead as a set of closely interrelated priorities, like signposts along the road to success in creating your own liquid architecture.

Flexibility
    This concept logically follows from realizing that the data lake is not a one-size-fits-all solution, but rather a suite of technologies and analytic options – like Teradata Aster Big Data Appliance, Teradata Appliance for Hadoop and Teradata Active Data Warehouse – that you should implement in the best and most customized way possible for your enterprise. Flexibility is what we’re after, and this comes from making your own choices about which technologies are best for you, and then harmonizing those options so they work seamlessly together. That latter requirement is the driving force behind our own recent release of Teradata QueryGrid, a set of intelligent connectors and product capabilities to coordinate queries across many complex resources and analytic options.  Whatever your analytic approach may be, make sure it’s flexible and adaptable, not frozen in some rigid and technology-centric solution. There’s a reason it’s called the data lake and not the data glacier.
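Teradata QueryGrid does that coordination inside the database itself. Purely as an illustration of the idea, here is a toy Python sketch in which two independent engines each answer the part of a question they are best at, and the results are harmonized into one answer. The two in-memory SQLite databases are stand-ins, not real warehouse or Hadoop connections, and the tables are invented.

```python
# Not QueryGrid itself: a toy illustration of the "liquid architecture"
# idea, joining results from two independent engines in one step.
# Two in-memory SQLite databases stand in for a warehouse and Hadoop.
import sqlite3
import pandas as pd

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE customers (id INT, segment TEXT)")
warehouse.execute("INSERT INTO customers VALUES (1, 'gold'), (2, 'silver')")

hadoop = sqlite3.connect(":memory:")
hadoop.execute("CREATE TABLE weblogs (customer_id INT, clicks INT)")
hadoop.execute("INSERT INTO weblogs VALUES (1, 42), (2, 7)")

# Each system answers the part of the question it is best at...
segments = pd.read_sql("SELECT id, segment FROM customers", warehouse)
clicks = pd.read_sql("SELECT customer_id, clicks FROM weblogs", hadoop)

# ...and the results are harmonized into a single answer.
print(segments.merge(clicks, left_on="id", right_on="customer_id"))
```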

    Access
At the risk of adding more nautical puns to the discussion, I recommend having your data lake available at multiple depths for a broad range of users, from the highly technical data scientists to the average business professional. Democratizing access reaps value and insight for your organization by bringing more brilliant minds to the analytics table. Make sure your architecture allows multiple points of access: from the granular staging areas where data scientists work with information from source systems in its original fidelity, to the more refined layers for aggregation and presentation, where tools like Teradata QueryGrid let business users access, and even experiment with, data.

    Governance
    Just because we need to open the data lake to a broad community of users doesn’t mean we should do so without proper governance. Especially if you’re empowering a lot of people to manipulate data sets by attribute, location, revenue or any number of criteria that might be useful to them, you need to make sure your architecture can provide this frictionless, self-service experience while still being able to standardize your data rules and access. Otherwise you get a Wild Wild West environment of poorly orchestrated data in different formats, and this can lead to error and costly duplication of data.

    Context
Putting data in the correct context is a governance piece that has become all the more important in a data lake scenario. Certain forms of data (health or financial information, for example) have enhanced requirements around things like privacy and security. Simply loading this kind of data into Hadoop along with everything else is obviously not a good idea, but neither is locking your data down. The key is to lock down the context of that data, not the data itself. This is one of the core principles behind our own Teradata Portfolio for Hadoop release, which provides a secure and tested data management platform for capturing, storing and refining data in ways that preserve agility and flexibility.
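As a minimal sketch of what locking down context rather than data can mean in practice, consider entitlement-based masking: the raw record stays in the lake, but what a caller sees depends on their role. The roles, fields and masking rules below are assumptions invented for illustration.

```python
# A hedged sketch of context lock-down: the record is stored once,
# and sensitive fields are redacted unless the caller is entitled.
# SENSITIVE_FIELDS, the roles and the sample record are hypothetical.
SENSITIVE_FIELDS = {"ssn", "diagnosis", "account_balance"}

def mask_record(record: dict, user_roles: set) -> dict:
    """Return the record with sensitive fields masked unless the
    caller holds a role entitled to see them."""
    entitled = bool(user_roles & {"compliance", "care_physician"})
    return {
        k: (v if entitled or k not in SENSITIVE_FIELDS else "***")
        for k, v in record.items()
    }

row = {"patient_id": 42, "diagnosis": "E11.9", "visit_date": "2015-03-01"}
print(mask_record(row, {"analyst"}))          # diagnosis masked
print(mask_record(row, {"care_physician"}))   # full context visible
```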

    Schema
You hear a lot of misleading talk about how the data lake liberates everyone from the hassles of schema, given that we can just capture and load everything into Hadoop and figure out some other time what it all means. The truth is, while the data may flow freely into the lake on write without schema, the price of that freedom is a heightened need for schema on read. This is the process of applying definition to data when it’s pulled out of a stored location rather than when it goes in, and that makes metadata all the more important. In fact, effective metadata management remains a major Hadoop challenge, and that’s one of the reasons why we recently acquired assets from Revelytix, including Loom, which helps automatically detect, parse and format Hadoop data. Whatever your particular solution, make sure you don’t ignore metadata in what may seem like a schemaless society.
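Here is a minimal schema-on-read sketch using PySpark: the raw JSON was written to the lake with no schema, and structure is applied only when the data is read. The path and field names are illustrative assumptions.

```python
# A minimal schema-on-read sketch with PySpark. The schema lives with
# the reader (or in a metadata catalog), not with the files themselves.
# The HDFS path and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

clickstream_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("page", StringType()),
    StructField("dwell_seconds", DoubleType()),
])

events = (
    spark.read
    .schema(clickstream_schema)   # definition applied on read, not write
    .json("hdfs:///lake/raw/clickstream/2015/03/")
)
events.groupBy("page").count().show()
```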

    Scott Gnau

    The post Top Five Watchwords for Implementing the Data Lake appeared first on Enlightened Data.

    Teradata Blogs Feed

     
  • admin 10:33 am on March 21, 2015 Permalink

    Webinar: The New Romantic Era in Business and Tech 

    How can we make work (and life) more magical? Tim Leberecht believes that a century after the industrial age, we are facing another “great disenchantment,” this time propelled by the connected age with its growing datafication, constant digital overwhelm, and radical transparency. Business is divorced from the full expression of our humanity, and for many of us something is missing, something both essential and immeasurable that lets us see the world with fresh eyes every day: romance.
    Teradata Events

     
  • admin 9:52 am on March 21, 2015 Permalink
Tags: Madness, March, Winner

    Can Data Help You Choose The Next March Madness Winner? 

Every March, office break rooms across the nation shift from chatter about work deadlines, weekend plans and the latest photocopier breakdown to obsessing over a single subject: college basketball.

    March Madness, otherwise known as the NCAA Men’s Division 1 Basketball Tournament, pits co-worker against co-worker, friend against friend and relative against relative, as millions of people rush to fill out their “brackets,” the diagrams detailing potential game matchups from the first round to the last.

    Fans predict who will reach the top spot out of 64 competing teams. Then, round by round, the field narrows, until a fraction of the initial competitors reach the Final Four. Eventually, two teams will meet in the Championship game on April 6.

If you predict the winners, you could claim anything from bragging rights at the water cooler and a few dollars in a sports pool to a big payout from the bookies in Vegas.

    But how do you actually make your picks? Do you rely on data?

You could consult historical patterns, like team-on-team performance or habitual conference leaders over time. You could focus on individual player performance over a season or over multiple seasons. But do certain players benefit from particular skill sets or statistical leaders around them, or are they consistent regardless of the support around them? And does coaching have an impact: style, emphasis, winning history, coaching team members and so on?

    The list of factors could be endless. That’s probably why there are so many different approaches to filling out a bracket. Some are all data, all the time. Some forget about the basketball part altogether.
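For the “all data, all the time” camp, here is a hedged sketch of what a data-driven pick might look like: a logistic regression over matchup features. The feature names and every number below are invented for illustration, not real tournament data.

```python
# A toy "all data" bracket approach: fit a logistic model on
# hypothetical historical matchups, then score a new matchup.
# All features and values are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [seed_diff, offensive_eff_diff, defensive_eff_diff, tempo_diff]
X_train = np.array([
    [-8, 6.0, -3.0, 1.0],   # heavy favorite
    [-2, 1.5, -0.5, 0.0],
    [0, 0.2, 0.1, -0.5],
    [3, -2.0, 1.0, 0.3],
    [9, -7.0, 4.0, -1.2],   # heavy underdog
])
y_train = np.array([1, 1, 1, 0, 0])  # 1 = the lower seed won

model = LogisticRegression().fit(X_train, y_train)

matchup = np.array([[-4, 3.1, -1.2, 0.4]])  # e.g. a 5-seed vs a 9-seed
print("P(lower seed wins):", model.predict_proba(matchup)[0, 1])
```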

    Keep in mind, though, that no matter how much you dig in and strategize with the data available to you, crazy things can happen, as in 2014, when the #7 seed Connecticut took it all and the #12 seed North Dakota State University Bison upset the #5 seed Oklahoma.

    The fact is, each of the March Madness teams is bigger than its collective set of data. The players come with hearts, minds and bodies that often surprise us with amazing — and sometimes disappointing — action when they actually get out on the floor. The numbers can only tell so much of the story.

    The same goes for marketing.

    You can collect and analyze all your data and come up with a set of approaches that seem to fit. You can predict customer behavior and interests based on the facts you have on hand.

But just as with basketball brackets, data-driven marketing is multi-faceted… and at the end of the day, it’s about humans, individuals who often do unpredictable things. People don’t always take the actions you expect. They are changing and evolving all the time. They do one thing one day, and another the next.

    Does that mean you should give up on data? Absolutely not! Your data remains your biggest asset. Just keep it in perspective.

Use analytics tools to dig deep into your customer behaviors and draw out insights that are bigger than the last click or last purchase. Then, use those insights to design marketing campaigns that meet your customers’ needs more effectively. Give your customers individualized attention and deliver an exceptional experience; don’t just push them to buy.

    That way, when you experience an “upset” that doesn’t seem to match your data, you’re in a much better position to pivot to meet a new need or connect in a new way.

    That’s individualized data-driven marketing.  And unlike some of the more unorthodox bracket approaches, it’s bound to lead to a win.

    The post Can Data Help You Choose The Next March Madness Winner? appeared first on Teradata Applications.

    Teradata Blogs Feed

     