Tagged: Going

  • admin 9:56 am on September 7, 2017 Permalink
    Tags: Going, having, Those

    Going to the cloud? Benefit from the amazing experiences of those who are having success at PARTNERS 2017 

    Latest imported feed items on Analytics Matters

  • admin 9:51 am on August 26, 2016 Permalink
    Tags: Going, Places

    Teradata Aster Analytics Going Places: On Hadoop and AWS 

    Speeds time to value and benefits big data users everywhere as the most advanced multi-genre analytics can now build on any investment.
    Teradata United States

  • admin 9:51 am on July 6, 2016 Permalink
    Tags: Going

    Going Big with Small Data 


  • admin 9:51 am on January 22, 2016 Permalink
    Tags: Going

    7 Digital Marketing Trends In 2016 That Are Going to DIE 

    Digital trends 2016 that will die

    If you are a digital marketer, think back 2-3 years and try to remember the trends that were ruling our world back then. How far have we come since? Which of those trends are still dominating? Which trends fell by the wayside?

    To answer this question, I interviewed three digital marketing experts: Kent Lewis of Anvil, Nicholas Scalice of Earnworthy and Liz Rodriguez of Design House. Combining their accounts with my own thoughts, I put together this list of 7 digital marketing trends that are going to die in 2016.

    7 Digital Trends On the Chopping Block for 2016

    1. Marketing on outdated or irrelevant platforms

    As Kent Lewis points out, some ideas were never that good. Despite being acquired by Twitter, Vine is still dying a slow death and should be off marketers’ radar by year’s end. The 6 second format is too short to pack a punch and the novelty has worn off.

    Similarly, Foursquare/Swarm and other “check-in” location-based apps have become increasingly less relevant in their current format. While location-based marketing will continue to grow and evolve, these platforms may not make the cut this year.

    2. Content with no strategy

    Consumers (and business decision makers) are jaded by the flood of information arriving in their social feeds daily. The only way to cut through the noise is to create timely, relevant and remarkable content that engages, whether on social platforms or in search results. Creating content based on a broader strategy will be essential to sustaining awareness and engagement over time. Ad-hoc content marketing will die in 2016, or the companies that subscribe to it will pay a price if they don’t evolve.

    3. The use of stock images by websites

    Trends continue to show that video and personalized photography yield higher conversions. It is evident that brands and businesses alike are transitioning to rich, personalized user experiences and ditching overused, generic stock images.

    4. Email blasts

    According to Nicholas Scalice, in 2016 we will see fewer marketers using “email blasts” in the traditional one-size-fits-all approach. Instead, marketers will use email automation and personalization to create campaigns that are highly customized to a specific individual or small group. Marketing automation software has reached the point where it is easy and affordable for in-house marketers to start using it and getting results, and personalization is the way to go for higher conversions. Providing emails that are customized and individualized for your users is crucial today: it is a chance for the brand to connect with the user in a very personal way and, in turn, build a long-term committed relationship.


    5. Outsourcing content

    According to Liz Rodriguez, we can expect to see a decline in outsourced content in 2016. Content that is personalized and tailored to a brand and business is critical, as users expect much more from online content. We expect to see less outsourcing to content farms and more in-house curated content strategies in 2016.

    6. Huge banner ads

    The problem here is that CTR (click-through rate) leaves much to be desired, currently averaging around 0.06% across all formats. Desktop web traffic is declining, but the mobile equivalent of the banner ad, the mobile pop-up, is just as ineffective; the problem is not the platform but the medium. In fact, people have found a way to “cope” with banner ads, and it’s called “banner blindness”: users completely avoid looking at banner ads and in turn develop a blindness to all sorts of advertising. Marketers should instead focus on native advertising and content marketing, providing the user with content value first rather than just enticing them with an offer.

    Figure: Banner blindness. Source: tintup.com

    7. Sending campaign traffic to traditional website pages

    You should under no circumstances send campaign traffic to your website’s home page. Home pages are inundated with information and offer many competing actions to take. If you have a specific purpose in mind, that is what you want to highlight. Instead, send your visitors to a landing page designed around a single goal, with nothing unrelated to that action. Landing-page software has become so accessible that marketers can now quickly and easily put up a customized landing page for each campaign, all without the need for a designer. A good landing page will result in good ROI.

    Bottom Line

    As marketers and as customers, we are evolving with the ever-changing technology at hand. The question is not whether we will adapt, but how quickly we will be able to adapt and keep up with technological advancements. What’s hot today might be considered old-fashioned and outdated tomorrow. More importantly, what worked in the past might not work in the future. Technology is not the only thing changing; our customers are changing too, in how they perceive and connect with our brands.

    As you’re building your marketing plan for 2016, keep these “dying” trends in mind. Don’t count on what worked for you before; instead, learn what has changed and come up with new ideas and new ways to leverage the technology at hand.

    Is there anything you are going to do differently this year after reading this article? Let me know in the comments below or continue this discussion on Twitter @yaelkochman.

    The post 7 Digital Marketing Trends In 2016 That Are Going to DIE appeared first on Teradata Applications.

    Teradata Blogs Feed

  • admin 9:51 am on November 15, 2015 Permalink
    Tags: Going

    4 Reasons Oil & Gas Companies Are Going To Fail In A Big Data World 

    In Gartner’s latest Hype Cycle, you won’t find the term “big data” listed anymore – because it’s no longer considered hype. Big data has made it all the way from its emergence in West Coast dotcoms to East Coast financial institutions, Far East manufacturing companies, and many more diverse places and industries around the globe.

    A quick Google search for “big data” and oil and gas would suggest these worlds have merged too. But no. Not only are oil companies not there yet, they are in danger of missing out on the whole opportunity.

    Here are four serious reasons why:

    1. Oil companies still manage their business data like librarians

    Or should I say, museum curators?

    To run the gamut from exploration to development, to production, there are many different formats of business data to be managed. Some are documents – engineering drawings from the development phase – and are managed as such. Some are physical things – rocks, fluid samples – that need to be catalogued and archived as physical things.

    But a lot of it is digital data, and oil companies are not even successfully taking advantage of this data that is already available in digital format. Instead of loading digital data in an easily accessible format, oil companies store the original measurement (and any contextual data) for posterity, as a single unit.

    Like a book in a library. Or a rock in a core store.

    But if you don’t make the data readily available for analytics, how can you make data-driven decisions?

    2. Oil companies just want to buy applications

    The Oil and Gas industry retains a strong preference for buying end-to-end data management solutions off the shelf, especially in the subsurface domain.

    Commonly we hear: “IT and data management infrastructure are not core business for us – we will not develop any custom solution.” But if you look at the industries and organisations that are benefiting the most from big data analytics and data-driven business, the absolute opposite is true: if what differentiates your company from your competition is how well you can turn your available data into insights, then this is core business.

    It gets worse when we consider workflows that regularly need to take in data from outside the thick walls of the subsurface domain – how can you perform repetitive, integrated studies across reservoir and production data without a data management framework that spans all of Exploration and Production (E&P)?

    3. Oil companies have lost their (geo)technical capability

    The inventors of the Raspberry Pi were concerned that our children’s understanding of computing would be limited to using an iPhone or a word processor rather than writing programs themselves.

    Tools like Petrel are replacing the holistic approach and even deterring people from testing science-driven hypotheses.

    Cast your mind back to the days before the integrated workstation interpretation suites, when it was important to understand first principles. But we are losing these capabilities every day – the long-threatened “Big Crew Change” is now visible daily as oil companies contract under low oil prices.

    The result is a lack of candidates to become the upstream data scientists that can discover new insights in the available data. If nobody in the Oil Company can apply the science, then analytical discovery just can’t happen.

    4. Oil companies implement IT in geological time

    The oil business is a strange one to outsiders. The financial numbers – both revenues and costs – are astronomical, the uncertainty is extremely high and the time to profit on a new project is long. Decisions made today may not take effect for a decade.

    In the North Sea, for example, if you discover a new oil field today, you are unlikely to see first oil from it for 8 years. What will the oil price be then? The world demand? And will the technology chosen in today’s Front End Engineering Design (FEED) study still be a good choice when the field enters its second decade of production? Who knows.

    In complete contrast, over in Dotcom land everything is now. Companies like eBay constantly run A/B tests on their websites, constantly tweaking and changing their offering – continuous incremental improvement is the norm.

    The big data technology landscape is evolving fast, and this is not the time to pick a technology and version and standardise for the future. Especially if your data formats and analytical techniques are different from the ones prioritised by the Dotcoms.

    The only sure-fire way to ensure you get a big data strategy that works for you is to join in – build some systems, load some data, join the open source communities, test out new strategies, push the limits, and commit back. It certainly wouldn’t hurt your career prospects!

    If – as I suspect – oil companies are not willing to show up and take part, there is a strong chance that the big data technologies that emerge the winners will not meet their needs. And that will be a huge opportunity lost.

    The post 4 Reasons Oil & Gas Companies Are Going To Fail In A Big Data World appeared first on International Blog.


  • admin 9:52 am on February 11, 2015 Permalink
    Tags: Going

    Going Beyond Hadoop So Where to Next? 

    In case you haven’t realised it yet, it’s 2015. The year ahead promises yet more technological advances. We will see slimmer, more vibrant TVs, data sensors built into everything we use (cars, toothbrushes, homes, etc.), and I dare say a plethora of new smartphones that will be just that little bit more intelligent, faster, and packed with extras that half of us will never use.

    But what about enterprise technology and the evolving nature of data analytics? Certainly over the past few years we have been on the upward curve of the Gartner Hype Cycle for analytics. In particular, we have seen the Hadoop market explode with a hive of activity as organisations look to get more insights and results out of their data.

    Figure 1: Gartner Hype cycle. Source: Wikipedia

    But whilst we continue to see organisations entrench Hadoop ever deeper into their landscapes, it is worth remembering that this is not new technology.

    The ideas behind Hadoop were born at Google in the early 2000s, as Google was one of the first organisations to experience the data explosion that most other organisations are only experiencing today. The rest is history, written about many times over: Google went on to develop the Google File System (GFS) and MapReduce, and used these two technologies to crawl, analyse and rank billions of web pages into the result set we all see at the front end of the Google interface every time we search.
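    The MapReduce model described above can be sketched as a toy, single-process word count. This is purely illustrative (Google’s implementation distributes these phases across thousands of machines), but the three stages are the same:

    ```python
    from collections import defaultdict

    def map_phase(documents):
        """Map: emit a (key, value) pair for every word in every document."""
        for doc in documents:
            for word in doc.split():
                yield word, 1

    def shuffle(pairs):
        """Shuffle: group all emitted values by key, as the framework does
        between the map and reduce phases."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        """Reduce: aggregate the values for each key -- here, a simple count."""
        return {key: sum(values) for key, values in groups.items()}

    docs = ["big data big insights", "big data"]
    counts = reduce_phase(shuffle(map_phase(docs)))
    print(counts)  # {'big': 3, 'data': 2, 'insights': 1}
    ```

    The same skeleton (map, shuffle, reduce) underlies the page-ranking jobs described above; only the map and reduce functions change.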

    Apache then got on board, and the result was Apache Hadoop, which had at its core HDFS (based on GFS) and MapReduce, amongst an array of other capabilities.

    Over the years we have seen Hadoop evolve into an ecosystem of open source technologies with a wide range of amusing names, such as Oozie, Pig, Hive, Spark and ZooKeeper. We have also seen it become mainstream and adopted by many organisations, and seen Teradata build the technology into its data ecosystem by developing the Unified Data Architecture and forming partnerships with Hortonworks and, more recently, MapR.

    But what’s interesting for those of us in the Hadoop field is that MapReduce, a core component of the original Hadoop technology, is not as important as it used to be.

    So for this blog I decided to look at two more recent technologies that promise to evolve the Hadoop journey and overcome the barriers we have encountered in the past. Bear in mind that even though we are just starting to see these technologies appear in the enterprise, their concept and design are now a few years old. It goes to show that in the open source community it takes a while to go from concept to enterprise.


    Percolator

    Great, yet another catchy name! Hadoop loves large data sets; in fact, the more you give it, the more it revels in its duty. But it suffers from full table scans each time you add more data, which means that as your data grows, your analysis time gets longer and longer, because you are constantly re-scanning large data sets.

    In fact, many organisations I have spoken to have assumed they can use Hadoop for fast processing of large and growing datasets, without realising that as their data grows, performance can suffer. So often the joke is that once you kick off a MapReduce job, you may as well go off and make a coffee, do your shopping, watch a movie, and then come back to see if the job has completed.

    Therefore, the smart cookies over at Google came up with Percolator. In essence, Percolator is an incremental processing engine: it replaces batch-based processing with incremental updates and secondary indexes. The result is that as you add more data, your processing times do not blow out, because full table scans are avoided.

    It is built on top of BigTable, a multi-dimensional, sparse, sorted map used in conjunction with MapReduce. The following figure shows the multi-layered approach of Percolator:

    Figure 2: Percolator architecture. Source: Google Research
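    The contrast between batch recomputation and Percolator-style incremental maintenance can be sketched with a toy running total. This illustrates the idea only; the real Percolator keeps its state in BigTable and uses snapshot-isolation transactions and observer triggers rather than in-memory objects:

    ```python
    # Batch approach (MapReduce-style): every refresh re-scans the entire
    # data set, so cost grows with total data volume.
    def batch_total(all_records):
        return sum(all_records)          # O(n) on every run

    # Incremental approach (Percolator-style): keep state and apply only the
    # delta each new record contributes, so cost per update stays constant.
    class IncrementalTotal:
        def __init__(self):
            self.total = 0

        def observe(self, record):
            self.total += record         # O(1) per new record
            return self.total

    stream = [10, 20, 30]
    inc = IncrementalTotal()
    for r in stream:
        inc.observe(r)

    assert inc.total == batch_total(stream) == 60
    ```

    Both approaches agree on the answer; the difference is that the incremental one never re-reads data it has already processed, which is exactly why processing times do not blow out as the data set grows.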


    Dremel

    Data science is the art of exploring data and attacking it from different angles to get new and different insights. MapReduce, however, is built for organised batch processing of jobs, and the volume of coding and level of expertise it requires is intense. That approach is not suited to the ad-hoc style of analysis over large data sets that data scientists need. And, as highlighted above, MapReduce jobs aren’t particularly fast, so they don’t lend themselves to ad-hoc, iterative exploration of large data sets.

    Once again, the team at Google came up with an answer: Dremel. In fact, it has been in use at Google by thousands of users since 2006, so it’s not really new technology per se; it does, however, represent the future. Dremel is a scalable, interactive, ad-hoc query system for the analysis of read-only nested data. Using a combination of multi-level execution trees and a columnar data layout, it can run a query over trillion-row tables in seconds.

    The sheer scalability of Dremel is truly impressive, with the ability to scale across thousands of CPUs and petabytes of data. Google’s own testing has demonstrated that it is roughly 100 times faster than MapReduce for these interactive workloads.
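    The columnar layout that underpins Dremel’s speed can be illustrated with a toy comparison (the table and column names here are invented for illustration; the real system adds multi-level execution trees and a nested-column encoding on top of this idea):

    ```python
    # Row-oriented storage: each record is stored whole, so an aggregate over
    # one field still has to touch every field of every record.
    rows = [
        {"user": "a", "country": "AU", "spend": 120},
        {"user": "b", "country": "US", "spend": 80},
        {"user": "c", "country": "AU", "spend": 50},
    ]
    row_total = sum(r["spend"] for r in rows)

    # Column-oriented storage (Dremel-style): each column is stored
    # contiguously, so a query that only needs `spend` reads just that one
    # column and skips the rest of the table entirely.
    columns = {
        "user": ["a", "b", "c"],
        "country": ["AU", "US", "AU"],
        "spend": [120, 80, 50],
    }
    col_total = sum(columns["spend"])

    assert row_total == col_total == 250
    ```

    On a three-row table the difference is invisible, but on a trillion-row table with hundreds of columns, scanning one column instead of all of them is a large part of where the claimed speed-up comes from.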

    In recent times, Dremel has inspired Hadoop extensions such as Apache Drill and Cloudera Impala, and we can expect them to become more and more prevalent within enterprises as Hadoop deployments become more advanced.

    So you may well ask: is Hadoop finished? Not really, but it is evolving. It is adapting to the needs of modern-day enterprises, with speed of analytics a primary driver of these advancements.

    It is no surprise that Google has been a key driver in the incubation of these new techniques, as it is our use of the internet that has given rise to the need for them. We are creating more and more data every day, but at the same time we need to analyse that data at a faster rate. Hadoop was developed in another era, when the volume of data was smaller and the need for speed was lower.

    So we will continue to see new and wonderful ways to tackle the data problem, and these will eventually make their way into the products we use today.

    Ben Davis is a Senior Architect for Teradata Australia, based in Canberra. With 18 years of experience in consulting, sales and technical data management roles, he has worked with some of the largest Australian organisations in developing comprehensive data management strategies. He holds a degree in Law and a postgraduate Masters in Business and Technology, and is currently finishing his PhD in Information Technology with a thesis on executing large-scale algorithms within cloud environments.

    The post Going Beyond Hadoop So Where to Next? appeared first on International Blog.

