Tag Archives: NoSQL

WEBINAR: Real-Time Data Ingestion and Streaming Analytics with Oracle and Hortonworks

November 24th, 2016 at 10am BST / 11am CEST

REGISTER HERE

Organisations today are looking to exploit modern data architectures that combine the power and scale of Big Data Hadoop platforms with operational data from their transactional systems. To react to situations in an agile manner and in real time, low-latency access to data is essential.

Hortonworks and Oracle can provide comprehensive solutions that allow organisations to respond rapidly to data events.

[Image: Oracle Modern Data Architecture diagram – https://i2.wp.com/hortonworks.com/wp-content/uploads/2014/09/Oracle_MDA.png]


During this webinar we will cover:

  • How Oracle GoldenGate empowers organisations to capture, route, and deliver transactional data from Oracle and non-Oracle databases for real-time ingestion into HDP®.
  • How GoldenGate complements HDF by providing optimised delivery to Hadoop targets such as HDFS, Hive, HBase, NoSQL and Kafka, to support customers with their real-time big data analytics initiatives (see the consumer sketch after this list).
  • How Oracle GoldenGate enables use cases for analysing data in motion.
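To make the Kafka delivery path concrete, here is a minimal, hypothetical sketch (not part of the webinar material). It assumes GoldenGate's Kafka handler has been configured to publish change records as JSON to a topic named orders_cdc on a local broker, and shows how a downstream Python application might consume them; the topic name, broker address and field names are illustrative assumptions.

# Minimal sketch: consuming JSON change records from a Kafka topic.
# Assumes GoldenGate's Kafka handler publishes JSON change events to a
# hypothetical topic "orders_cdc" on a broker at localhost:9092.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders_cdc",
    bootstrap_servers=["localhost:9092"],
    auto_offset_reset="earliest",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for record in consumer:
    change = record.value  # one insert/update/delete event as a dict
    # React with low latency, e.g. update a dashboard or raise an alert.
    # Field names depend on the handler configuration; these are examples.
    print(change.get("op_type"), change.get("table"), change.get("after"))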

Attend this webinar to learn how Hortonworks and Oracle can help you with your real-time big data analytics and streaming initiatives!

REGISTER HERE


Big Data – debunking some of the myths and unlocking value

London 19 March 2015 – techUK

Embarcadero invites you to a breakfast briefing that will examine some of the complexity around Big Data and what it really means. Many vendors claim almost magical capabilities for their ‘Big Data’ products, but the reality is that successful analysis comes down to understanding your algorithms and understanding your data. Learn how data modelling is integral to this. Chris’s presentation will be followed by an overview of Embarcadero’s support for Big Data and a short demonstration of this capability.

Register Now

Guest speaker Chris Swan is CTO at Cohesive Networks, a cloud networking company that he joined in 2013. He has spent much of his career in financial services, as CTO at UBS and CapitalSCF and in various R&D, architecture and engineering positions at Credit Suisse. Prior to that, Chris was a Combat Systems Engineering Officer in the Royal Navy. Chris is cloud editor for InfoQ. He has been an independent mentor in the London FinTech Innovation Lab since its founding, and is also an advisor to a number of London-based tech and fintech start-ups.

Join us at techUK on 19th March 2015 to find out more. The event will be held at the techUK offices in St Bride Street; for more details, see http://www.techuk.org/

Agenda:
8:30 – Registration and breakfast, tea and coffee served
9:00 – Briefing
10:00 – Q&A

London
19 March 2015
techUK
10 St Bride Street
London EC4A 4AD

Register Now

Article: Schema on Read?

A great article from Techopedia.

Source: http://www.techopedia.com/definition/30153/schema-on-read

Definition – What does Schema on Read mean?

Schema on read refers to an innovative data analysis strategy in new data-handling tools like Hadoop and other more involved database technologies. In schema on read, data is applied to a plan or schema as it is pulled out of a stored location, rather than as it goes in.

Techopedia explains Schema on Read

Older database technologies had an enforcement strategy of schema on write—in other words, the data had to be applied to a plan or schema when it was going into the database. This was done partially to enforce consistency of data, and that is one of the major benefits of schema on write. With schema on read, the persons handling the data may need to do more work to identify each data piece, but there is a lot more versatility.
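For contrast, here is a minimal schema-on-write sketch (an invented table and data, not from the article): the database enforces the schema as rows go in and rejects anything that does not fit.

# Minimal schema-on-write sketch (hypothetical table and data): the
# schema is enforced when data is written, so inconsistent rows are rejected.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT NOT NULL, action TEXT NOT NULL)")

conn.execute("INSERT INTO events VALUES ('alice', 'login')")  # accepted

try:
    conn.execute("INSERT INTO events VALUES ('bob', NULL)")  # violates NOT NULL
except sqlite3.IntegrityError as exc:
    print("Rejected at write time:", exc)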

In a fundamental way, the schema-on-read design complements the major uses of Hadoop and related tools. Companies want to effectively aggregate a lot of data, and store it for particular uses. That said, they may value the collection of unclean or inconsistent data more than they value a strict data enforcement regimen. In other words, Hadoop can accommodate getting a wide scope of different little bits of data that might not be completely organized. Then, as that information is used, it gets organized. Applying the old database schema-on-write system would mean that the less organized data would probably be thrown out.

Another way to put this is that schema on write is better for getting very clean and consistent data sets, but those data sets may be more limited. Schema on read casts a wider net, and allows for more versatile organization of data. Experts also point out that it is easier to create two different views of the same data with schema on read.
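To make that concrete, here is a minimal schema-on-read sketch in Python (records and field names are invented, not from the article): the raw events are stored as-is, and two different schemas, i.e. two different views, are applied only when the data is read.

# Minimal schema-on-read sketch (invented records and field names).
# Raw, loosely structured events are kept as-is; a schema is applied at read time.
import json

RAW_EVENTS = [
    '{"user": "alice", "action": "login", "ts": "2016-11-24T10:00:00"}',
    '{"user": "bob", "action": "purchase", "amount": 42.5}',  # extra field
    '{"user": "carol", "action": "logout"}',                  # missing fields
]

def read_with_schema(lines, schema):
    # Project each raw record onto the requested schema at read time;
    # missing fields become None instead of being rejected up front.
    for line in lines:
        record = json.loads(line)
        yield {field: record.get(field) for field in schema}

# Two different views of the same raw data:
activity_view = list(read_with_schema(RAW_EVENTS, ["user", "action"]))
revenue_view = list(read_with_schema(RAW_EVENTS, ["user", "amount"]))

print(activity_view)
print(revenue_view)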

This schema-on-read strategy is one essential part of why Hadoop and related technologies are so popular in today’s enterprise technology. Businesses are using large amounts of raw data to power all sorts of business processes by applying fuzzy logic and other sorting and filtering systems involving corporate data warehouses and other large data assets.


EVENT: Data, Big Data, Enterprise Data – how businesses can use modelling to get the best value from it

Find out how Data Modelling Techniques for Big Landscapes, Big Models, Big Data and Big Teams allow organisations to exploit enterprise data for the greatest business benefit. Data governance offers the only strategic solution to the challenges companies face from the significant growth in data volume, diversity and complexity. And, by providing greater insight into the location, meaning and proper use of enterprise data, organisations can improve corporate data compliance and utilisation.

Matthew Basoo will discuss how Validus Group has faced this challenge head-on and present their key recommendations.

Speakers: Matthew Basoo, Group BI Design Authority, Validus Group and Mark Barringer, Product Manager, Embarcadero

London
23 October 2014

techUK
10 St Bride Street
London EC4A 4AD
Registration, Coffee and Breakfast: 08:30
Briefing: 09:00 – 10:00

REGISTER NOW

 

Webinar: Data Modeling for Big Data & NoSQL Technologies with Karen Lopez

Since data modeling became a mainstream development technique, our work has focused primarily on data modeling for relational databases. Then came data warehousing and dimensional modeling. Now we have both transactional and analytical non-relational solutions to support as well. Where does data modeling fit into these projects? Where do models come into play if they are rooted in relational notations and processes?

In this webinar, Karen Lopez of InfoAdvisors will cover 10 tips for the modern data architect and resources for coming up to speed on these new approaches. She will share how modern data modeling approaches address both SQL (relational) and NoSQL technologies. We’ll look at the role of a data modeler, and how models, modeling processes and data governance can add value to enterprise big data and NoSQL development projects.
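As a small illustration of the kind of decision such models help with (a hypothetical sketch, not from the webinar), the snippet below contrasts a normalized, relational-style layout with an embedded document of the sort a MongoDB collection might hold; all names and fields are invented.

# Hypothetical sketch: the same order modelled two ways.

# Relational-style (normalized): separate "tables" linked by keys.
customers = [{"customer_id": 1, "name": "Acme Ltd"}]
orders = [{"order_id": 100, "customer_id": 1}]
order_lines = [
    {"order_id": 100, "sku": "WIDGET", "qty": 3},
    {"order_id": 100, "sku": "GADGET", "qty": 1},
]

# Document-style (denormalized): one document embeds the related data,
# much as a MongoDB collection might store it.
order_document = {
    "_id": 100,
    "customer": {"customer_id": 1, "name": "Acme Ltd"},
    "lines": [
        {"sku": "WIDGET", "qty": 3},
        {"sku": "GADGET", "qty": 1},
    ],
}

A data model makes the trade-off explicit: embedding favours fast single-document reads, while referencing favours consistency and reuse across collections.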

Register Now

Tuesday 26th August 2014 11AM PDT / 1PM CDT / 2PM EDT

About the Presenter:
Karen Lopez is the Senior Project Manager and Architect at InfoAdvisors. She has more than twenty years of experience helping organizations implement large, multi-project programs. She is a Microsoft MVP (SQL Server) and a Dun & Bradstreet MVP.

InfoAdvisors is a Toronto-based data management consulting firm. We specialize in the practical application of data management. Our philosophy is based on assessing the cost, benefit, and risk of any technique to meet the specific needs of our client organizations.

 

VIDEO: Resurrection of SQL with Big Data and Hadoop

Did you really think that SQL was going away? Watch this session to learn how SQL is a vital part of the next generation of data environments. Find out how you can use your existing SQL tools in the big data ecosystem.
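As one example of what “existing SQL tools in the big data ecosystem” can look like (a minimal sketch under assumptions, not taken from the session): with PySpark installed, plain SQL can be run directly over a raw file such as a hypothetical events.json sitting on HDFS.

# Minimal sketch: running plain SQL over raw files with PySpark.
# Assumes PySpark is installed and a hypothetical events.json exists on HDFS.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-on-big-data").getOrCreate()

# The schema is inferred at read time; the raw file is left untouched.
events = spark.read.json("hdfs:///data/events.json")
events.createOrReplaceTempView("events")

# The same SQL you would write against a relational database.
top_actions = spark.sql("""
    SELECT action, COUNT(*) AS occurrences
    FROM events
    GROUP BY action
    ORDER BY occurrences DESC
""")

top_actions.show()
spark.stop()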

Oz Basarir is the product manager for Embarcadero’s database management and development tools. Having worked with databases over the last two decades, at companies ranging from one as small as his own to ones as large as Oracle and SAP, he has an appreciation for the diversity of data ecosystems as well as for tried-and-true languages such as SQL.

Learn more about DBArtisan and try it free at http://embt.co/DBArtisan
Learn more about Rapid SQL and try it free at http://embt.co/RapidSQL

Resurrection of SQL with Big Data and Hadoop
by Oz Basarir – Embarcadero


See more Data U Conference session replays and download slides at http://embt.co/DBDataU

Article: Perspective and preparation, Data modeling concepts still vital in business

Is data modeling outdated? This excerpt from the book Data Modeling for MongoDB: Building Well-Designed and Supportable MongoDB Databases by Steve Hoberman argues that data modeling concepts are still vital to business success and introduces useful terminology and tips for simplifying a complex information landscape with MongoDB applications. Hoberman is the most requested data modeling instructor in the world and has educated more than 10,000 people across five continents about data modeling and BI techniques. In this excerpt, he emphasizes the necessity for businesses to implement data modeling concepts and explores a variety of business uses for data models.

View Article Now

By Steve Hoberman

 Copyright info

This excerpt is from the book Data Modeling for MongoDB: Building Well-Designed and Supportable MongoDB Databases, by Steve Hoberman. Published by Technics Publications, LLC, Basking Ridge NJ, ISBN 9781935504702. Copyright 2014, Technics Publications.