Tag Archives: Big Data

Why Self Service Data Wrangling Speeds Up Time to Value

By Sandrine Riley-Oracle on Sep 09, 2016

Oracle’s Big Data Preparation Cloud Service (BDP) provides value in analytics and data management projects at any scale. It empowers business users to process complex business data from any source, of any size and format: from small departmental data to large enterprise data to massive IoT and log data. The service is the only data wrangling tool that uses a unique combination of Machine Learning and Natural Language Processing, leveraging a semantic knowledge graph in the Cloud. This makes it more efficient at mapping relationships and at making more accurate repair and enrichment recommendations. Curious? Check out this short BDP video for an overview!


It is becoming more evident that data preparation is important in speeding time to value. With growing data volumes and siloed data, businesses are finding that further and faster growth can be achieved through better data, and a key preliminary step is preparing, enriching, and wrangling that data. With the help of Forrester, 160 IT decision makers from around the world were surveyed, yielding valuable insight into the growing importance of streamlining data preparation to deliver cutting-edge business insights.

READ the Technology Adoption Profile:  Data Preparation Accelerates Self-Service.

Oracle’s cloud-based technology with Oracle Big Data Preparation helps bridge the IT-business gap, showing how self-service data wrangling, when done right, imparts great value, provides rich recommendations, and helps streamline and automate the data preparation pipeline. Oracle Big Data Preparation Cloud Service provides an agile, intuitive interface that automates, streamlines, and guides the process of ingesting, preparing, enriching, and publishing data, targeted at the data integration needs of the data steward and IT.

To learn more about Oracle Big Data Preparation Cloud Service, visit us at our websites here and here.  We hope you find this research compelling!

Original Post

About the Author:

Sandrine Riley

Product Management – Data Integration Solutions at Oracle


What is Your Big Data Story?

Check out this blog post by Karen Lopez and then let us know your Big Data Story. http://embt.co/1yPSjMe

Extract:

I’ve been speaking, teaching, and ranting about big data and NoSQL technologies recently. When I chat with data modelers, I’ve met with a lot of skepticism about big data.

You may be that “guy” if you’ve ever said:

• It’s just mainframe all over again.

• It’s a fad to get out of data modeling tasks.

• It’s not something I need to know about.

• I don’t have big data, so I don’t need to know NoSQL.

I think what’s missing from that thinking is the fact that modern data architectures use technologies that are the best fit for solving a data “problem”. I like to use the term “data story” instead of “problem.” It’s not that the newer technologies are replacing traditional relational database systems; they are complementing them.

Continue Reading here: http://embt.co/1yPSjMe

Article: Schema on Read?

Great article from Techopedia:

Source: http://www.techopedia.com/definition/30153/schema-on-read

Definition – What does Schema on Read mean?

Schema on read refers to an innovative data analysis strategy in new data-handling tools like Hadoop and other more involved database technologies. In schema on read, data is applied to a plan or schema as it is pulled out of a stored location, rather than as it goes in.

Techopedia explains Schema on Read

Older database technologies had an enforcement strategy of schema on write—in other words, the data had to be applied to a plan or schema when it was going into the database. This was done partially to enforce consistency of data, and that is one of the major benefits of schema on write. With schema on read, the persons handling the data may need to do more work to identify each data piece, but there is a lot more versatility.

In a fundamental way, the schema-on-read design complements the major uses of Hadoop and related tools. Companies want to effectively aggregate a lot of data, and store it for particular uses. That said, they may value the collection of unclean or inconsistent data more than they value a strict data enforcement regimen. In other words, Hadoop can accommodate getting a wide scope of different little bits of data that might not be completely organized. Then, as that information is used, it gets organized. Applying the old database schema-on-write system would mean that the less organized data would probably be thrown out.

Another way to put this is that schema on write is better for getting very clean and consistent data sets, but those data sets may be more limited. Schema on read casts a wider net, and allows for more versatile organization of data. Experts also point out that it is easier to create two different views of the same data with schema on read.
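The contrast above can be sketched in a few lines. This is a minimal, hypothetical illustration (the record fields and function names are made up, not part of any Hadoop API): raw records are stored untouched at ingest time, and a schema, one of possibly several "views", is applied only when the data is read.

```python
import json

# Schema on write: validate/shape records *before* storing them.
# Schema on read: store raw records as-is; apply structure when reading.

raw_store = []  # stands in for a file or HDFS directory of raw records

def ingest(raw_line):
    """Schema-on-read ingestion: keep the raw record untouched."""
    raw_store.append(raw_line)

def read_as_user(raw_line):
    """Apply one 'view' (schema) at read time; tolerate messy records."""
    try:
        rec = json.loads(raw_line)
    except json.JSONDecodeError:
        return None  # unclean data stays in storage but is skipped by this view
    return {"name": rec.get("name"), "age": rec.get("age")}

ingest('{"name": "Ada", "age": 36, "city": "London"}')
ingest('{"name": "Grace"}')      # missing fields are fine
ingest('not json at all')        # unclean record is still stored

users = [u for u in (read_as_user(r) for r in raw_store) if u]
```

Under schema on write, the third record would have been rejected at load time; here it is retained, and a second `read_as_...` function could impose an entirely different view on the same stored data.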

This schema-on-read strategy is one essential part of why Hadoop and related technologies are so popular in today’s enterprise technology. Businesses are using large amounts of raw data to power all sorts of business processes by applying fuzzy logic and other sorting and filtering systems involving corporate data warehouses and other large data assets.


EVENT: Data, Big Data, Enterprise Data – how businesses can use modelling to get the best value from it

Find out how Data Modelling Techniques for Big Landscapes, Big Models, Big Data and Big Teams allow organisations to exploit Enterprise Data to gain the most benefit for the business. Data governance offers the only strategic solution to the challenges faced by companies due to the significant growth in data volume, diversity and complexity. And, by providing greater insight into the location, meaning and proper use of enterprise data, organisations can improve corporate data compliance and utilisation.

Matthew Basoo will discuss how Validus Group has faced this challenge head-on and present their key recommendations.

Speakers: Matthew Basoo, Group BI Design Authority, Validus Group and Mark Barringer, Product Manager, Embarcadero

London
23 October 2014

techUK
10 St Bride Street
London EC4A 4AD
Registration, Coffee and Breakfast: 08:30
Briefing: 09:00 – 10:00

REGISTER NOW


VIDEO: Superior Data Modeling Techniques for Teradata users

Taking your Teradata Database to the Next Level

As a Teradata customer, you already know that managing your data is a big job. Perhaps you’ve been wishing for a database tool to help you do that more efficiently. Your wish has been granted!

ER/Studio includes first-tier support for Teradata, enabling data professionals to better manage their data warehouses from logical and physical models. Teradata is recognized as a leading provider of very large-scale, mission-critical enterprise data warehouses, and their customers have come to rely on these features to obtain the best functionality and performance.

Watch this 10-minute Teradata video on demand to see how ER/Studio can help you:

  • Use ALTER scripts to make modifications to a Teradata database
  • Leverage MLPPI to improve the performance of certain queries and high-volume insert, update and delete operations
  • Manage your historical data with temporal data types

With a dedicated approach to helping data management professionals raise the bar, ER/Studio can help save you time with powerful data modeling and architecture tools to meet the evolving demands of today’s data warehouse environments.


VIDEO: Resurrection of SQL with Big Data and Hadoop

Did you really think that SQL was going away? Attend this session to learn how SQL is a vital part of the next generation of data environments. Find out how you can use your existing SQL tools in the big data ecosystem.

Oz Basarir is the product manager of Embarcadero’s database management and development tools. Having worked over the last two decades with databases at companies ranging from as small as his own to as large as Oracle and SAP, he has an appreciation for the diversity of data ecosystems as well as for tried-and-true languages like SQL.

Learn more about DBArtisan and try it free at http://embt.co/DBArtisan
Learn more about Rapid SQL and try it free at http://embt.co/RapidSQL

Resurrection of SQL with Big Data and Hadoop
by Oz Basarir – Embarcadero

Oz Basarir

See more Data U Conference session replays and download slides at http://embt.co/DBDataU

Article: Perspective and preparation, Data modeling concepts still vital in business

Is data modeling outdated? This excerpt from the book Data Modeling for MongoDB: Building Well-Designed and Supportable MongoDB Databases by Steve Hoberman argues that data modeling concepts are still vital to business success and introduces useful terminology and tips for simplifying a complex information landscape with MongoDB applications. Hoberman is the most requested data modeling instructor in the world and has educated more than 10,000 people across five continents about data modeling and BI techniques. In this excerpt, he emphasizes the necessity for businesses to implement data modeling concepts and explores a variety of business uses for data models.

View Article Now

By Steve Hoberman

 Copyright info

This excerpt is from the book Data Modeling for MongoDB: Building Well-Designed and Supportable MongoDB Databases, by Steve Hoberman. Published by Technics Publications, LLC, Basking Ridge NJ, ISBN 9781935504702. Copyright 2014, Technics Publications.