
SigmaWay Blog

SigmaWay Blog aggregates original and third-party content for site users. It covers articles on Process Improvement, Lean Six Sigma, Analytics, Market Intelligence, Training, IT Services, and the industries SigmaWay serves.

Data Mining- An Overview

Data mining is the practice of examining large pre-existing databases in order to generate new information or predict future trends. It is useful because it: 1) identifies multi-dimensional patterns in large datasets, 2) establishes relationships between variables, 3) supports building predictive models, 4) lets companies mine customer data to acquire new customers, retain existing ones, and cross-sell, and 5) helps financial-sector companies build fraud detection and risk mitigation models. The skills required for data mining are statistical knowledge, artificial intelligence/machine learning, and competency in Python, R, and SQL; along with these technical skills, a data miner should have business knowledge and other soft skills. Read more at: http://www.datasciencecentral.com/profiles/blogs/data-mining-what-why-when
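As a toy illustration of the pattern-mining idea above, the sketch below counts which items co-occur in customer baskets; frequently paired items are candidate cross-sell rules. The transaction data is made up for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy transaction log: each row is one customer's basket.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "butter", "cereal"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pair is a candidate cross-sell rule.
top_pair, support = pair_counts.most_common(1)[0]
print(top_pair, support)  # ('bread', 'butter') appears in 3 of 4 baskets
```

Real association-rule miners (e.g. Apriori) add support/confidence thresholds on top of exactly this kind of co-occurrence counting.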

 

Rate this blog entry:
2761 Hits
0 Comments

Banks Depend on Data

Organizations need databases to store data safely. Database technologies now span everything from NoSQL and RDBMS frameworks to in-memory databases. These help banks deliver quicker response times and effective analytics, leading to better customer experience and retention. By using a middle layer on top of various databases, banks can rapidly assemble information. For more, read the article by Nanda Kumar (CEO, SunTec Business Solution): https://www.finextra.com/blogposting/12478/making-data-work-for-banks
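A hedged sketch of the "middle layer" idea: a thin class fans a customer lookup out to several backing stores and merges the results, so callers see one response regardless of where each piece of data lives. The store names and fields below are invented for illustration.

```python
class CustomerDataLayer:
    """Toy middle layer that aggregates records from multiple stores."""

    def __init__(self, sources):
        # sources: mapping of store name -> lookup callable
        self.sources = sources

    def profile(self, customer_id):
        merged = {}
        for name, lookup in self.sources.items():
            record = lookup(customer_id)
            if record:
                merged.update(record)
        return merged

# Stand-ins for an RDBMS, a NoSQL store, and an in-memory cache.
rdbms = {1: {"name": "A. Rao", "branch": "Pune"}}
nosql = {1: {"recent_txns": 14}}
cache = {1: {"risk_score": 0.12}}

layer = CustomerDataLayer({
    "rdbms": rdbms.get,
    "nosql": nosql.get,
    "cache": cache.get,
})
print(layer.profile(1))
```

The caller never knows which backend supplied which field, which is the point of putting such a layer in front of heterogeneous databases.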

 

Rate this blog entry:
3389 Hits
0 Comments

Big data is good for databases

Mention databases and what comes to mind? Data management, storage, tables, RDBMS… But this is changing, and the reason is big data. Data has become so big that conventional methods and tools are no longer enough to manage it. So what to do? We need database technologies that can deal with big data. Hadoop is the most popular technology for tackling it: a data-centric platform that is highly scalable and runs applications in parallel. Another alternative to RDBMS is a NoSQL platform such as MongoDB. Read the full article here: http://www.infoworld.com/article/3003647/database/how-big-data-is-changing-the-database-landscape-for-good.html
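A minimal, single-machine sketch of Hadoop's map/shuffle/reduce model, counting words. Hadoop runs these same three phases, but distributed in parallel across a cluster; the data here is made up.

```python
from collections import defaultdict

def map_phase(line):
    # map: emit a (key, 1) pair for every word
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: combine each key's values into one result
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big tools", "big clusters"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 3, 'data': 1, 'tools': 1, 'clusters': 1}
```

Because map and reduce only see key/value pairs, each phase can be split across machines, which is what makes the model scale to big data.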

Rate this blog entry:
4350 Hits
0 Comments

Big Data on Organ Transplant Market

With more than 120,000 people in need of organ transplants and a shortage of donors, economists, doctors and mathematicians are using data to save lives. On a very basic level, the organ transplant process can be separated into two categories: organs taken from living donors and organs harvested from deceased donors. From living donors, doctors can take one of a person's two kidneys, as well as part of his or her liver. From a deceased donor, doctors are able to extract a cadaver's kidneys, liver, heart, lungs, pancreas, intestines and thymus. Of the organs donated in 2013, roughly 80% came from deceased donors, according to UNOS. While it's preferable to receive a kidney from a living donor, the donors and candidates are incompatible in approximately one-third of potential kidney transplants because of mismatched blood or tissue types. In the case of incompatibility, a candidate is placed on what's commonly referred to by the public as a "waiting list".

UNOS receives information from both the candidate and the deceased donor to establish compatibility: blood type; body size, since thoracic organs like the heart and lungs need to be transplanted into a similarly sized recipient; and geography, as it seeks to match candidates locally, regionally and then nationally. With that data, UNOS' algorithm rules out the incompatible. It then ranks the remainder based on urgency and geography. For example, a liver made available in Ohio would theoretically go to the closest compatible candidate with the highest MELD (Model for End-Stage Liver Disease) score.
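The filter-then-rank idea described above can be sketched in a few lines: rule out incompatible candidates, then sort the rest by urgency and distance. The field names, candidate data, and scoring rule below are illustrative, not UNOS' actual algorithm.

```python
# Minimal ABO blood-type compatibility table:
# donor type -> recipient types that donor can serve.
ABO_COMPATIBLE = {
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def rank_candidates(donor, candidates):
    # Step 1: rule out blood-type-incompatible candidates.
    compatible = [
        c for c in candidates
        if c["blood_type"] in ABO_COMPATIBLE[donor["blood_type"]]
    ]
    # Step 2: rank by urgency (higher MELD first), then by proximity.
    return sorted(compatible, key=lambda c: (-c["meld"], c["distance_km"]))

donor = {"blood_type": "A"}
candidates = [
    {"name": "p1", "blood_type": "B",  "meld": 38, "distance_km": 40},   # incompatible
    {"name": "p2", "blood_type": "A",  "meld": 31, "distance_km": 500},
    {"name": "p3", "blood_type": "AB", "meld": 31, "distance_km": 80},
    {"name": "p4", "blood_type": "A",  "meld": 35, "distance_km": 900},
]
print([c["name"] for c in rank_candidates(donor, candidates)])  # ['p4', 'p3', 'p2']
```

The real system also checks tissue typing, body size, wait time and many other factors, but the filter-then-rank structure is the same.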

In 2010, UNOS launched its Kidney Paired Donation Program, which uses an algorithm developed by Sandholm and his team. So far, the program's matches have resulted in 97 transplants, with more than a dozen scheduled in the coming months. To read in detail visit: http://mashable.com/2014/07/23/big-data-organ-transplants/
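The simplest paired-donation case is a two-way swap: two internally incompatible donor-recipient pairs exchange donors when each donor can give to the other pair's recipient. A toy sketch, using only blood types (real programs, like the algorithm mentioned above, also check tissue typing and optimize over longer cycles and chains):

```python
from itertools import combinations

ABO_COMPATIBLE = {
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def can_give(donor_type, recipient_type):
    return recipient_type in ABO_COMPATIBLE[donor_type]

def find_two_way_swaps(pairs):
    # pairs: list of (donor_blood_type, recipient_blood_type) tuples,
    # each internally incompatible. Return index pairs that can swap.
    swaps = []
    for (i, a), (j, b) in combinations(enumerate(pairs), 2):
        if can_give(a[0], b[1]) and can_give(b[0], a[1]):
            swaps.append((i, j))
    return swaps

# Pair 0: donor A, recipient B; pair 1: donor B, recipient A.
# Neither donor can give within their own pair, but they can swap.
pairs = [("A", "B"), ("B", "A"), ("AB", "O")]
print(find_two_way_swaps(pairs))  # [(0, 1)]
```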

Rate this blog entry:
5113 Hits
0 Comments

Big Data in good governance

Intelligent processing of data is essential for addressing societal challenges. Data could be used to enhance the sustainability of national health care systems and to tackle environmental problems by, for example, processing energy consumption patterns to improve energy efficiency, or pollution data to improve traffic management.

To make the best use of big data now and in the future, government must have the right infrastructure in place. The article advocates a "data force", modeled on the successful nudge unit, to access data from different departments and identify where savings could be made. New data science skills will be needed across government, but more important is ensuring that public service leaders are confident in combining big data with sound judgment.

Closely linked to the government's drive to make better use of big data is its drive to make data open. Open data, particularly in the public sector, is often big data – for example, the census, or information about healthcare or the weather – and making large government datasets open to the public drives innovation from within government and outside. See more at: http://fcw.com/Articles/2013/09/25/big-data-transform-government.aspx?Page=1

 

Rate this blog entry:
5429 Hits
0 Comments

Google’s Cloud Platform Gets Improved Hadoop Support

Google integrated Hadoop into its Cloud Platform some time ago to give customers a framework for storing and processing large amounts of data. Until recently, the only way to get data in and out of Hadoop on Cloud Platform was through Google's Cloud Storage service. To let Hadoop run on data within its BigQuery SQL and Cloud Datastore NoSQL databases, Google has now launched connectors for both products, complementing the existing Cloud Storage implementation.

To read more, visit the following link:

http://techcrunch.com/2014/04/16/googles-cloud-platform-gets-improved-hadoop-support-with-bigquery-and-cloud-datastore-connectors/
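What a "connector" buys you, in a loose, hypothetical sketch: the processing job consumes one record-iterator interface, whatever the backing store is. The class and method names below are invented for illustration and are not Google's actual connector API.

```python
class Connector:
    """Abstract source of records for a processing job (illustrative only)."""

    def records(self):
        raise NotImplementedError

class ListStoreConnector(Connector):
    # Stand-in for a connector to some storage or database service.
    def __init__(self, rows):
        self.rows = rows

    def records(self):
        yield from self.rows

def run_job(connector, mapper):
    # The job only sees records(), never the storage backend,
    # so swapping backends means swapping the connector, not the job.
    return [mapper(r) for r in connector.records()]

source = ListStoreConnector([{"clicks": 3}, {"clicks": 5}])
print(run_job(source, lambda r: r["clicks"] * 2))  # [6, 10]
```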

Rate this blog entry:
15386 Hits
0 Comments

MongoDB 2.6 hits general availability - so what's in it?

MongoDB has announced release 2.6 of its open source NoSQL database. The release introduces automation features such as incremental backup, point-in-time recovery, and monitoring, visualization and alerts on more than 100 parameters, in order to improve provisioning and management. Users can access network resources more efficiently, and the MongoDB query language gains integrated text search in 15 languages. To know more about the features and efficiency of this release, follow the article by Toby Wolpe, senior reporter at ZDNet in London:

http://www.zdnet.com/mongodb-2-6-hits-general-availability-so-whats-in-it-7000028148/
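Conceptually, a database text index like MongoDB's resembles an inverted index: each token maps to the documents containing it, so a search touches only matching documents instead of scanning everything. A toy stdlib sketch (ignoring the stemming and per-language analysis MongoDB performs):

```python
from collections import defaultdict

docs = {
    1: "release adds incremental backup",
    2: "integrated text search in many languages",
    3: "point in time recovery and monitoring",
}

# Build the inverted index: token -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(query):
    # Return ids of documents containing every query token.
    token_sets = [index[t] for t in query.lower().split()]
    return set.intersection(*token_sets) if token_sets else set()

print(search("text search"))  # {2}
```

In MongoDB itself the equivalent is creating a text index on a field and querying it with the `$text` operator.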

Rate this blog entry:
6340 Hits
0 Comments