Updates from October, 2014

  • admin 9:51 am on October 31, 2014 Permalink

    Magic Quadrant for Integrated Marketing Management – Gartner 2014 


    Teradata Analyst Reports

     
  • admin 9:51 am on October 31, 2014 Permalink
    Tags: bunk, either

    Big Data: not unprecedented but not bunk either – part IV 

    Over the course of this Big Data blog series, I have tried to identify what Big Data really means, the challenges that organisations which have successfully exploited it have had to overcome – and how the consequences of those challenges are re-shaping Enterprise Analytic Architectures. Now I want to take a look at two of the key questions that we at Teradata – and indeed the industry at large – will have to address in the coming months and years as distributed architectures become the norm.

    The rise of the “Logical Data Warehouse” architectural pattern is a direct consequence of the five key Big Data challenges that I discussed in part 3 of this series of blogs. Precisely because there is no single Information Management strategy – never mind a single Information Technology – that addresses all five challenges equally well, it is increasingly clear that the future of Enterprise Analytical Architecture is plural and that organisations will need to deploy and integrate multiple Analytic platforms.

    The first key question, then, that the Industry has to answer is: what types of platforms – and how many of them?

    Actually, whilst that formulation is seductively simple, it’s also flawed. So much of the Big Data conversation is driven by technology right now that the whole industry defaults to talking about platforms when we should really discuss capabilities. Good Enterprise Architecture, after all, is always, always, always driven by business requirements. To put the question of platforms before the discussion of capabilities is to get things the wrong way around.

    So let’s re-cast that first question: how many and which type of Analytic capabilities?

    At Teradata, we observe that the companies we work with that have been most successful in exploiting Big Data increasingly use manufacturing analogies to describe how they manage information.

    In manufacturing, raw materials are acquired and are subsequently transformed into a finished product by a well-defined manufacturing process and according to a design that has generally been arrived at through a rather less well-defined and iterative Research and Development (R&D) process.

    Listen carefully to a presentation by a representative of a data-driven industry leader – the likes of Apple, eBay, Facebook, Google, Netflix or Spotify – and time and again you will hear them talk about three key capabilities: the acquisition of raw data from inside and outside the company; the research or “exploration” that allows these data to be understood so that they can be exploited; and the transformation of the raw data into a product that business users can understand and interact with to improve business processes. In other words, the companies that compete on Analytics focus on doing three things well: data acquisition, data R&D and data manufacturing. Conceptually at least, 21st-century organisations need three Analytic capabilities to address the five challenges that we discussed in part 3 of this blog, as represented in the “Unified Data Architecture” model reproduced below.

    [Figure: Teradata Unified Data Architecture model, October 2014]

    It is important to note at this point that it doesn’t necessarily follow that a particular organisation should automatically deploy three (and no more) Analytical platforms to support these three capabilities. The “staging layers” and “data labs” in many pre-existing Data Warehouses (a.k.a. Data Manufacturing), for example, are conceptually similar to the “data platform” (Data Acquisition) and the “exploration and discovery platform” (Data R&D) in the Unified Data Architecture model shown above – and plenty of organisations will find that they can provide one or more of the three capabilities via some sort of virtualised solution. Plenty more will be driven to deploy multiple platforms where conceptually one would do – by political concerns, for example, or by regulatory and compliance issues that restrict where sensitive data can be stored or processed.

    As is always the case, mapping a conceptual architecture to a target physical architecture requires a detailed understanding of functional and non-functional requirements, and also of constraints. A detailed discussion of that process is not only beyond the scope of this blog – it also hasn’t changed very much in the last several years, so we can safely de-couple it from the broader questions about what is new and different about Big Data. Functional and non-functional requirements continue to evolve very rapidly, and some of the constraints that have traditionally limited how much data we can store, for how long and what we can do with it have been eliminated or mitigated by the new Big Data technologies. But the guiding principles of Enterprise Architecture are more than flexible enough to accommodate these changes.

    So much for the first key question; what of the second? Alas, “the second question” is also a seductive over-simplification – because rather than answer a single second question, the Industry actually needs to answer four related questions.

    Deploying multiple Analytical platforms is easy. Too easy, in fact – anticipate a raft of Big Data repository consolidation projects during the next decade in exactly the same way that stovepipe Data Mart consolidation projects have characterized the last two decades. It is the integration of those multiple Analytical platforms that is the tricky part. Wherever we deploy multiple Analytical systems, we need to ask ourselves:

    a) How will multiple, overlapping and redundant data sets be synchronised across the multiple platforms? For example, if I want to store two years’ history of Call Detail Records (CDRs) on the Data Warehouse and ten years’ history of the same data on a lower unit-cost online archive technology, how do I ensure that the overlapping data remain consistent with one another?

    b) How do I provide transparent access to data for end-users? For example, and in the same scenario: if a user has a query that needs to access five years of CDR history, how do I ensure that the query is either routed or federated correctly, so that the right answer is returned without the user having to understand either the underlying structure or the distribution of the data? (A minimal routing sketch follows this list.)

    c) How do I manage end-to-end lineage and meta-data? To return to the manufacturing analogy: if I want to sell a safety critical component – the shielding vessel of a nuclear reactor, for example – I need to be able to demonstrate that I understand both the provenance and quality of the raw material from which it was constructed and how it was handled at every stage of the manufacturing process. Not all of the data that we manage are “mission-critical”; but many are – and many more are effectively worthless if we don’t have at least a basic understanding of where they came from, what they represent and how they should be interpreted. Governance and meta-data – already the neglected “ugly sisters” of Information Management – are even more challenging in a distributed systems environment.

    d) How do I manage the multiple physical platforms as if they were a single, logical platform? Maximising availability and performance of distributed systems requires that we understand the dependencies between the multiple moving parts of the end-to-end solution. And common integrated administration and management tools are necessary to minimize the cost of IT operations if we are going to “square the circle” of deploying multiple Analytical platforms even as IT budgets are flat – or falling.
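    To make question (b) concrete, here is a minimal routing sketch in plain Java, using the CDR scenario from question (a): two years of hot history on the Data Warehouse and ten years on a lower unit-cost online archive. It only decides which platform(s) a query must touch based on the requested date range; the class, the platform names and the hot-data boundary are illustrative assumptions, and in practice this kind of transparency is delivered by federation middleware such as Teradata QueryGrid rather than by hand-written code.

        import java.time.LocalDate;
        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch: route a CDR history query to the platform(s) that hold the data.
        public class CdrQueryRouter {

            enum Platform { DATA_WAREHOUSE, ONLINE_ARCHIVE }

            // Data newer than this boundary lives on the warehouse; older history
            // (including the overlapping two years) also lives on the cheaper archive.
            private static final LocalDate HOT_BOUNDARY = LocalDate.now().minusYears(2);

            static List<Platform> route(LocalDate from, LocalDate to) {
                List<Platform> targets = new ArrayList<>();
                if (!to.isBefore(HOT_BOUNDARY)) {
                    targets.add(Platform.DATA_WAREHOUSE);  // part of the range is "hot"
                }
                if (from.isBefore(HOT_BOUNDARY)) {
                    targets.add(Platform.ONLINE_ARCHIVE);  // part of the range is "cold"
                }
                return targets;  // a federating layer would then run and merge the sub-queries
            }

            public static void main(String[] args) {
                // A query over five years of history transparently needs both platforms.
                System.out.println(route(LocalDate.now().minusYears(5), LocalDate.now()));
            }
        }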

    At Teradata, our objective is to lead the industry in this evolution as our more than 2,500 customers adapt to the new realities of the Big Data era. That means continuing to invest in Engineering R&D to ensure that we have the best data, exploration and discovery, and data warehouse platforms in the Hadoop, Teradata Aster and Teradata technologies, respectively; witness, for example, the native JSON type that we have added to the Teradata RDBMS and the BSP-based Graph Engine and Analytic Functions that we have added to the Teradata Aster platform already this year. It means developing new – and acquiring existing – middleware and management technologies like Teradata Unity, Teradata QueryGrid, Revelytix and Teradata Viewpoint to address the integration questions discussed in this blog. And it means growing still further our already extensive Professional Services delivery capabilities, so that our customers can concentrate on running their businesses whilst we provide soup-to-nuts design-build-manage-maintain services for them. Taken together, our objective is to provide support for any Analytic on any data, with virtual computing to provide transparent orchestration services, seamless data synchronization – and simplified systems management and administration.

    If our continued leadership of the Gartner Magic Quadrant for Analytic Database Management Systems is any guide, our Unified Data Architecture strategy is working. More importantly, more and more of our customers are now deploying Logical Data Warehouses of their own using our technology. Big Data is neither unprecedented nor bunk; to paraphrase William Gibson, “it’s just not very evenly distributed”. By making it easier to deploy and exploit a Unified Data Architecture, Teradata is helping more and more of our customers to compete effectively on Analytics – to be Big Data-driven.

    Teradata Blogs Feed

     
  • admin 9:47 am on October 31, 2014 Permalink

    Big Data Integration and Analytics for Cyber Security 


    Teradata White Papers

     
  • admin 9:46 am on October 31, 2014 Permalink
    Tags: Scratch

    How 7-Eleven Built its Digital Guest Engagement Program from Scratch 

    Teradata Press Mentions

     
  • admin 9:51 am on October 30, 2014 Permalink
    Tags: HDFS

    Breaking down Hadoop Lingo Part 1: HDFS 

    I came off another Hadoop training course last week, this time centered around Hive and Pig. Keeping up to date with what’s happening in the Hadoop space is exhausting; just recently, Teradata announced a partnership with the other big Hadoop player, Cloudera.

    Keeping track of the bugs, the releases, what other people are building, how the platform is being used and where it is heading is a never-ending course of reading and research. In my previous blogs I’ve covered the value of Hadoop and how important it is to have a metadata strategy for it.

    Many people have a vague understanding of what Hadoop does and the business benefits it provides, but others need to delve into the detail. Over the next few blogs, I’m going to cover some of the basic individual components of Hadoop in detail: what they do, some use cases and why they are important. The best approach, I think, is to start from the ground and move up, so blog #1 will focus on HDFS (the Hadoop Distributed File System).

    The purpose of HDFS is to distribute a large data set across a cluster of commodity Linux machines so that the computing resources on those machines can later be used to perform batch data analytics. One of the key attractions of Hadoop is its ability to run on cheap hardware, and HDFS is the component that provides this capability. HDFS provides very high-throughput access to the data and is well suited to storing large data sets. Those throughput rates make it great for quickly landing data from multiple sources such as sensor, RFID and web log data.

    Each HDFS cluster contains the following:

    • NameNode: Runs on a “master node”; it tracks and directs the storage of the cluster.
    • DataNode: Runs on the “slave nodes,” which make up the majority of the machines within a cluster. The NameNode instructs data files to be split into blocks, each of which is replicated three times by default and stored on machines across the cluster. These replicas ensure the entire system won’t go down if one server fails or is taken offline – a property known as “fault tolerance.”
    • Client machine: Neither a NameNode nor a DataNode, client machines have Hadoop installed on them. They are responsible for loading data into the cluster, submitting MapReduce jobs and viewing the results once the jobs complete.

    WORM – Write Once, Read Many. HDFS uses a write-once-read-many access model for files: a file, once created, written and closed, need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. There is a plan to support appending writes to files in the future.

    The following diagram, taken from the Apache documentation, outlines the basics of the HDFS architecture.


    Diagram 1 – The HDFS architecture

    Data Replication within HDFS

    HDFS stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. The NameNode makes all decisions regarding replication of blocks.
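    As a minimal illustration of those two knobs, the sketch below uses the HDFS Java API (assuming a Hadoop 2.x client on the classpath) to create a file with an explicit replication factor and block size, and then raises the replication factor afterwards so that the NameNode re-replicates its blocks. The NameNode address and file path are placeholders.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class ReplicationExample {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode:8020");  // placeholder NameNode address
                FileSystem fs = FileSystem.get(conf);

                Path file = new Path("/data/cdr/sample.dat");       // placeholder path

                // Create the file with a 2-way replication factor and a 128 MB block size...
                FSDataOutputStream out =
                        fs.create(file, true, 4096, (short) 2, 128L * 1024 * 1024);
                out.writeUTF("example record");
                out.close();

                // ...then change the replication factor later; the NameNode handles the rest.
                fs.setReplication(file, (short) 3);

                fs.close();
            }
        }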

    Accessibility of HDFS

    I’ve been asked on several occasions whether files stored in Hadoop are proprietary – accessible only to applications that are part of the Hadoop stack. In fact, the opposite is true: HDFS can be accessed in many different ways. Natively, HDFS provides a Java API for applications to use, and a C language wrapper for this Java API is also available. In addition, an HTTP browser can be used to browse the files of an HDFS instance.
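    For instance, a minimal sketch of the native Java route might look like the following: list a directory, then stream the first few lines of one of its files. The cluster address and paths are placeholders; the same data could equally be read through the C wrapper or browsed over HTTP.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsReadExample {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode:8020");  // placeholder NameNode address
                FileSystem fs = FileSystem.get(conf);

                // List the contents of a directory.
                for (FileStatus status : fs.listStatus(new Path("/data/weblogs"))) {
                    System.out.println(status.getPath());
                }

                // Stream the first ten lines of one file.
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(fs.open(new Path("/data/weblogs/part-00000"))));
                String line;
                int shown = 0;
                while ((line = reader.readLine()) != null && shown++ < 10) {
                    System.out.println(line);
                }
                reader.close();
                fs.close();
            }
        }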


    Storage volumes

    Another question that is often asked is: what volume of data can Hadoop hold? Well, how long is a piece of string? As a concrete minimum, the Teradata Hadoop appliance provides 12.5TB per data node, so a three-DataNode cluster starts at 37.5TB of storage. Add in an average compression factor of 3x and all of a sudden we are looking at roughly 112TB of data storage for a minimum Hadoop configuration. That’s some serious storage!
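    For what it’s worth, the back-of-the-envelope arithmetic behind those numbers fits in a few lines of Java; the per-node capacity and the 3x compression factor are simply the figures quoted above, not measurements.

        public class CapacityEstimate {
            public static void main(String[] args) {
                int dataNodes = 3;               // minimum DataNode count in the example
                double tbPerNode = 12.5;         // quoted capacity per data node, in TB
                double compressionFactor = 3.0;  // quoted average compression

                double rawTb = dataNodes * tbPerNode;            // 37.5 TB
                double effectiveTb = rawTb * compressionFactor;  // ~112.5 TB

                System.out.printf("Raw: %.1f TB, after compression: ~%.1f TB%n",
                        rawTb, effectiveTb);
            }
        }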

    In summary, HDFS is often called the “secret sauce” of Hadoop: it is the layer where the data is stored and managed. Think of it as a standard file storage system with the added ability to replicate data across commodity hardware. A minimal Hadoop installation has a NameNode that manages the environment (metadata, file locations and so on) and multiple DataNodes across which the data is stored in blocks.

    Ben Davis is a Senior Architect for Teradata Australia, based in Canberra. With 18 years of experience in consulting, sales and technical data management roles, he has worked with some of the largest Australian organisations on developing comprehensive data management strategies. He holds a degree in Law, a postgraduate Masters in Business and Technology, and is currently finishing his PhD in Information Technology with a thesis on executing large-scale algorithms within cloud environments.

     

    Teradata Blogs Feed

     
  • admin 9:47 am on October 30, 2014 Permalink
    Tags: Showrooming, Uncovers

    Showrooming Uncovers a New World of Retail Opportunities 


    Teradata White Papers

     
  • admin 9:46 am on October 30, 2014 Permalink
    Tags: tipped

    Wearables tipped to reinvent retail sector 

    Teradata Press Mentions

     
  • admin 9:44 am on October 30, 2014 Permalink

    Connecting Brands to Consumers 


    Teradata Brochures

     
  • admin 9:51 am on October 29, 2014 Permalink
    Tags: Arthur, Lisa

    Teradata Marketing Applications CMO Lisa Arthur Named DMA Marketer of the Year 

    Teradata Corp. (NYSE: TDC), the big data analytics and marketing applications company, today announced that Lisa Arthur, CMO, Marketing Applications, has been named Marketer of the Year for 2014 by the Direct Marketing Association (DMA).
    Teradata News Releases

     
  • admin 9:51 am on October 29, 2014 Permalink
    Tags: 20/20

    If The Future Is Now, What Does 2020 Have In Store For Marketers? 

    In the “Back to the Future” trilogy, Dr. Emmett “Doc” Brown, the inventor of the time travel machine, comes from “the future.” That future is, believe it or not, October 2015 – only one year from now.

    Sure, Doc has a time machine built from a DeLorean, but the 2015 portrayed in the movies makes no mention of smartphones, social media, marketing drones, etc. Some might argue that the technology most of us carry around in our pockets today is more impressive than anything the creators of “Back to the Future” could have dreamed up.

    So, here’s an interesting thought experiment: Based on the marketing technology available today, what would life be like for Doc if he came from slightly further in the future – say, 2020? What changes are you expecting over the next five years? Here’s my take… in the form of a memo.

    From:  Dr. Emmett Brown

    To:  Marty McFly

    Date:  October 15, 2020

    Dear Marty,

    As you know, my eyes suck.  It started about age 45.  Thought I had dodged it. I thought wrong.  Crazy hair first, extremely farsighted second.  Luckily the new Apple i8 comes with an auto-lens that corrects for my horrible vision without me having to put on my glasses to use the phone.  Nothing so far from Apple on fixing crazy hair, but at least I can see my phone. :^)

    Sure, the Galaxy phone still has a bigger screen than the Apple, but I couldn’t see it without my glasses.  Apple once again has taken individual usability to a whole new level.  Additionally, one really cool side effect of the automatic visual correction is security: unless the person sitting next to me has the exact same vision impairment as I do, my screen looks all blurry to them.

    However, I still carry my glasses. And I can just as easily suspend the auto-lens feature on my phone when I’m wearing my corrective lenses.  And my glasses are awesome. They’re actually a new model of the Google Glass with all the creepy features taken out.  Everybody hated the original Google Glass.  They looked stupid. They made everyone think they were being surreptitiously recorded. Fortunately, the engineers at Google listened and got it right by the third release. They finally just built the Google Glass features into a standard pair of eye glasses. Now, you don’t have to wear an additional appliance on your head because the internet is fully integrated into standard eyeglasses.

    It also helps that battery life in these devices got better. U-Beam finally took off in the US and that means all these little gadgets charge up automatically, so you’re never caught having to find a power outlet to charge your eyeglasses. Add in the digital wallet and “swipe & pay” purchase technologies that are part of devices today and no one would be caught dead without their biometrically secure personal mobile device (iPhone, glasses, watch, bracelet, jacket, backpack or mini key fob).  The sheer convenience of waving your phone or watch or keys over a scanner to pay for things has revolutionized payment technologies around the world. Many people have truly achieved a cashless lifestyle.  Sure, cash is still around but there’s getting to be less and less and less of it, and the youth of today actually laugh at anybody carrying coins around.  They make me feel like such a dinosaur.  Kids don’t ask parents for money; they ask for credits.

    But, I’m just describing the interactional devices and their availability and connectivity. Amazingly, it’s the way that we can interact with different brands and companies that’s truly revolutionary.

    Once connectivity was ubiquitous, brands needed to evolve their marketing campaigns and how they engaged with customers.  Granted, they could still send a piece of direct mail with a special offer or promotion, but then they’d be missing many of the opportunities that present themselves. The successful brands of today have taken digital marketing and customer engagement to a whole new level.

    For instance, location-based marketing is the first area of revolutionary change, and it has been transformed in two major ways.

    First, imagine a customer standing somewhere in town and seeing an advertisement on a billboard, in a taxi cab, at a bus stop, on the side of a building, on a poster in the window or at the entrance to a shopping mall.  If that ad has something the customer can scan or a short code he can text, the company placing that ad knows roughly where the customer is located, based on the code associated with the ad. Then, the company can engage by sending location specific offers.

    Even more interesting, however, are GPS-related location-based offers. In the previous scenario, the vendor only knows where the customer is located because the customer sent a specific code. In this second scenario, vendors are waiting for prospective customers to get within a certain range of a location before they send their offers.

    This is great if you’re a store owner. You could set up a “come see me net” within a certain proximity of your location, and anytime a customer or prospect gets within that radius you could automatically send a real time marketing message to the recipient with an appropriate offer. Anytime somebody got close enough to make it practical, you could reach out and engage them.

    Now, imagine that you’re me and you love pizza. When I get to restaurant row in town, my glasses (or phone, or key fob, etc.) start receiving offers from the different pizza joints I’ve opted in with. The really clever ones are even able to respond in real-time to the other guys’ offers and win my pizza business. I like just getting close and seeing what all their specials and incentives are for the night.

    And then there’s biometrically-based marketing, another area of revolution.

    So many of these smart devices (because they’re so ubiquitous and powerful) need to be highly secure. I can’t afford to have somebody run off with my key fob and be able to “swipe and go” with a 12-pack… or my new DeLorean! :^)  So, all these devices have a biometric sensor built in to determine ownership.

    The first benefit of biometric identification is security and flexibility. My daughter can borrow my iPhone to go to the mall and buy something, but when she’s holding my phone it has a payment limit, when I’m holding my phone it doesn’t.  Cool, huh?

    The second (and absolutely coolest) benefit of biometric identification systems is health and wellness. Think FitBit with an integrated EKG. These things know how far I walk or ride my bike.  They know when I need to slow down because my heart-rate is too high. They know when I need to stop working and get a snack because my blood sugars are dropping. In fact, I haven’t passed out in months! They can also alert me to early heart attack symptoms and automatically engage my doctor and, if needed, emergency response personnel.

    Remember when we thought 2015 was futuristic and amazing? It’s even better now!

    See you soon.

    Your pal,

    Doc

    Teradata Blogs Feed

     