IoT and Big Data
Earlier, when introducing the M2M paradigm, we said that a new mindset and approach were required when considering the best techniques and technologies for implementing IoT. This is certainly true of data storage, where we need to reconsider how we store data and adopt new styles of database. The reason we require new database models is that requirements have evolved from data to big data: data management now involves handling huge amounts of raw or unstructured data. As a result, the model must be scalable, flexible and capable of real-time analysis of aggregated data to provide a unified view.
The requirement, then, is for different types of database. We will still need structured relational (SQL) databases such as Oracle, MySQL and IBM DB2, but we will also need databases that can handle raw unstructured data at scale, such as MongoDB and Cassandra, which provide unified views, agility and heterogeneity. The core advantage of Big Data is that it enables predictive analysis over all the data, not just the subset deemed interesting, as in SQL. This unified view of all the aggregated data allows Big Data databases to find patterns, correlations and hidden insights that would not be possible with SQL alone.
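The contrast between a fixed relational schema and a schema-flexible document store can be sketched as follows. This is a minimal illustration, not production code: it uses Python's built-in sqlite3 in place of a full SQL server, and plain dictionaries to stand in for MongoDB-style documents; the sensor fields and values are invented for the example.

```python
import sqlite3

# Structured side: SQL demands the schema up front; every row has the same columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id TEXT, temperature REAL, ts INTEGER)")
conn.execute("INSERT INTO readings VALUES ('s1', 21.5, 1000)")
conn.execute("INSERT INTO readings VALUES ('s2', 19.8, 1001)")
avg_sql = conn.execute("SELECT AVG(temperature) FROM readings").fetchone()[0]

# Unstructured side: each document may carry different fields, as in MongoDB
# or Cassandra. Nothing forces a video frame and a GPS fix into one schema.
documents = [
    {"sensor_id": "s1", "temperature": 21.5, "ts": 1000},
    {"sensor_id": "s3", "video_frame": b"\x00\x01", "codec": "h264", "ts": 1002},
    {"sensor_id": "s4", "gps": {"lat": 51.5, "lon": -0.1}, "ts": 1003},
]

# A unified view can still aggregate whatever fields happen to be present.
temps = [d["temperature"] for d in documents if "temperature" in d]
```

The point is that the document store accepts heterogeneous records as they arrive, while analysis simply picks out the fields it understands.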
The idea behind big data is that everything we do produces a digital trail; some activity data is always left behind. For example, even reading an eBook, watching an online video or listening to music leaves a digital trail, as do our purchases in stores when we use bank cards. Indeed, we cannot even walk through a store without our images being recorded by CCTV and stored somewhere in a database. This is before we even consider our interactions on email or social media, where everything we write is stored somewhere. Big Data software can access distributed databases to aggregate this vast collection of data and perform predictive analysis on it.
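Aggregating a person's trail from separate stores into one unified view might look like the toy sketch below. The data sources, field names and records are all invented for illustration; a real pipeline would pull from distributed databases rather than in-memory lists.

```python
from collections import defaultdict

# Two separate "databases" holding different slices of a user's digital trail.
purchases = [
    {"user": "alice", "item": "ebook"},
    {"user": "bob", "item": "album"},
]
page_views = [
    {"user": "alice", "page": "/video/42"},
    {"user": "alice", "page": "/store"},
]

# Merge them into one unified per-user view, the kind of aggregation a
# Big Data job would perform across distributed stores before analysis.
unified = defaultdict(lambda: {"purchases": [], "views": []})
for p in purchases:
    unified[p["user"]]["purchases"].append(p["item"])
for v in page_views:
    unified[v["user"]]["views"].append(v["page"])
```

Once the per-user records are unified, pattern-finding and predictive analysis can run over the whole dataset rather than one silo at a time.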
The components and functionality of Big Data are best illustrated by the four V's:
Volume – refers to the vast amounts of data generated and stored. Big Data tools allow us to aggregate data from distributed sources and perform predictive analysis on it.
Velocity – Big Data technology can perform in-memory, real-time analysis of data as it is generated.
Variety – refers to the types of data we can use. Around 80% of the world's data is unstructured (video, voice, text, images) and so does not fit the neat, predetermined structure of an SQL database. Big Data deals with unstructured data, so it can handle any type of data from sensors just as easily as text messages and voice recordings.
Veracity – handling unstructured data is messy and the data is sometimes untrustworthy, as it does not comply with a predetermined structure. Big Data technology nevertheless still allows us to work with this type of data.
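The velocity point, analysing readings in memory as they arrive, can be sketched with a sliding-window average over a stream. This is a hypothetical example with invented readings; real deployments would use a stream-processing engine rather than a bare generator.

```python
from collections import deque

def sliding_average(stream, window=3):
    """Yield the in-memory average over the last `window` readings
    as each new reading arrives, without storing the full stream."""
    buf = deque(maxlen=window)  # only the window is kept in memory
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

# The final 100.0 models a spike a real-time pipeline would want to flag.
readings = [20.0, 22.0, 24.0, 100.0]
averages = list(sliding_average(readings))
```

Because the window is bounded, the analysis keeps pace with the stream no matter how much data has gone before, which is what makes in-memory, real-time processing feasible.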
The value of Big Data comes from this ability to work with cloud computing and distributed systems to aggregate data and to perform analysis on unified views of the entire dataset, unearthing hidden insights and correlations.
Big Data's ability to perform predictive, in-memory analysis on vast quantities of data makes it a perfect fit for IoT. Indeed, along with cloud computing and distributed systems, Big Data is one of the underpinning technologies of IoT. Another key concern is not just how we store and manipulate data within IoT but how we transport it in the first place, and this is where communication protocols become important.