11/06/2011

Wine Party with Opus One

I'm not into Opus One; it's a premium wine that costs more than it's worth. One of my friends nevertheless asked me to buy it last month. I remember he asked me for Kistler Chardonnay before. Buying premium wine bottles is a bit embarrassing. It's like buying Louis Vuitton bags on a business trip in Paris.

Anyway, I got one bottle and donated it to a wine party yesterday.
Five wine addicts started with Pertois Moriset Grand Cru Brut, then tried blind wine tasting as usual.
My score was two points. Here is a memo for my memory.

  1. Opus One 2008, Cabernet Sauvignon 86%, Petit Verdot 8%, Merlot 4%, Cabernet Franc 1%, and Malbec 1%, from Napa
  2. Chambolle-Musigny Premier Cru Les Noirots 2003, Pinot Noir, from Chambolle-Musigny
  3. Chateau Calon-Segur 2003, Cabernet Sauvignon 60%, Merlot 30%, Cabernet Franc 10%, from Saint-Estephe
  4. Lisini Brunello di Montalcino 2005, from Montalcino, Toscana

Guessing Pinot Noir was so easy, so I got one point. The Brunello confused me so much. It tasted like a thin but elegant French Bordeaux, and I took it for the Chateau Calon-Segur. The Opus One showed traces of clove and dark chocolate, so I scored my second correct guess.

11/05/2011

Emergence of Big Data Found in Web 2.0

October 13, 2011
translated from my Japanese version in Wireless Wire News

In the author's understanding, over the past decade "social media was born through information shared instantly via human networks, and likewise, the era of 'information socialization' has arrived, in which scattered information is extensively collected, assigned value, and provided." When Tim O'Reilly proposed the concept of "What Is Web 2.0" in September 2005, his insight was fresh.
One of its concepts is a database that grows in conjunction with its users. As the amount of user data increases, the service is enhanced, which pulls in even more user data. When the data exceeds a critical mass, a service of great value is created, against which other companies cannot compete. Typical examples are the various Google services. Data is an asset; the principal asset of competitive power. O'Reilly said that "Data is the Next Intel Inside" and that how you design the places where data is generated is important. He showed the direction of Internet services in the Web era.

About a year later, on August 9, 2006, former Google CEO Eric Schmidt used the word "cloud" to describe a large, global-scale server group. Two weeks after that, on August 24, the Amazon EC2 service was introduced. This was not simple happenstance. The iPhone was launched in the US the following year, on June 29, 2007, and smartphones emerged that provide services in collaboration with the cloud. The introduction of Android, which followed the iPhone, clarified the function of such devices: they are cloud devices that generate real-world data on a global scale.
In short, Web 2.0 showed the concept that data is an important asset for corporate activities, and SNS built on accumulated data, along with Internet services for media accumulation, distribution, and search, have advanced dramatically thanks to the emergence of the cloud, cloud devices, and large-scale database processing technology. "Information socialization," in which public services are provided on a global scale by linking as many as 100 million computers, created the value-added resource called the "Global Brain," to borrow Tim O'Reilly's term. Examples include Google's voice recognition and machine translation, and recommendations based on Facebook and Twitter data analysis. The situation in 2011 can be expressed using the following formula.

Professor Maruyama, chairman of the Japan Android Group, described the scale of data accumulation and processing happening globally as "Web-Scale" (2009).[1]
What is Web-Scale data? At this time, Web-Scale data includes server logs, sensor information, images and video, SNS data, blogs, and social graphs from Twitter and Facebook. I call those items "Big Data." In general, the characteristics of such data are that it is large in scale, its structure is not constant,[2] and a quick response is required. Furthermore, much of the data has historical meaning and thus, in many cases, it cannot be thinned out. The challenge is how to process Big Data. There are two aspects to this: algorithms and systems.
[1] If it is not Web-Scale, it is not a cloud. A cloud is a system technology or platform that supports the Web-Scale explosion of information and expansion of users; simply calling a company's data warehouse a "private cloud" misses its true nature.
[2] Because the structure is not constant, one idea is to use NoSQL. However, since data modeling allows the data to be handled as structured data, I think it is proper to handle it in SQL. In practice, use NoSQL plus Hadoop when you want statistics over the data, and use SQL, with its emphasis on consistency, when you want to reproduce the data itself, as in the sketch below. I think the spread of Hadoop will depend on how popular the statistical use of Big Data becomes.
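
To make the distinction in footnote [2] concrete, here is a minimal Python sketch of the two usage patterns: a MapReduce-style aggregation for statistics over semi-structured records, and SQL for reproducing the records themselves. The event data and field names are hypothetical, chosen only for illustration.

# Minimal sketch: statistics over semi-structured records vs. SQL for exact reproduction.
import sqlite3
from collections import Counter

# Semi-structured records, as they might arrive from server logs or SNS feeds;
# each record is a dict and the set of fields is not constant.
events = [
    {"user": "a", "action": "view", "item": "p1"},
    {"user": "b", "action": "buy",  "item": "p1", "price": 120},
    {"user": "a", "action": "view", "item": "p2"},
]

# (1) Statistical use: a MapReduce-style aggregation, the kind of job one would
# hand to NoSQL + Hadoop at Web-Scale. Only counts matter, not per-record fidelity.
mapped = ((e["action"], 1) for e in events)      # map: emit (key, 1) pairs
action_counts = Counter()
for key, value in mapped:                        # reduce: sum values by key
    action_counts[key] += value
print(action_counts)                             # Counter({'view': 2, 'buy': 1})

# (2) Reproducing the data itself: structured rows in SQL, where consistency matters.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE purchases (user TEXT, item TEXT, price INTEGER)")
db.execute("INSERT INTO purchases VALUES ('b', 'p1', 120)")
db.commit()
print(db.execute("SELECT user, item, price FROM purchases").fetchall())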


Designing the "place" for Big Data collection as part of a service
Machine learning technology, which automatically learns useful rules and statistical models from data, and pattern recognition technology, which identifies data using the acquired rules or statistical models, have been researched to date. Pattern recognition researchers were interested in the methodology and algorithms themselves: how to convert voice data to text, how to automatically enter handwritten text into a computer, and how to automatically track human faces in images. They could easily write research papers if they conceived a good algorithm and simply ran experiments on real data.
Until 2005, there was no Big Data. After 2006, however, the time came when people began trying to create and improve services by applying machine learning and pattern recognition to Big Data. Look at the success of Google. There have been many examples of "More Data beats Better Algorithms" (MDbBA). Google's autonomous driving demonstration is one good example. Without relying on combinations of complicated algorithms, the demonstration showed that a car could drive automatically from San Francisco to Los Angeles using collected map data combined with distance measurements and image sensors.
Automatic driving is an example of a service in which machine learning exceeds a critical point once there is sufficient data. That being said, there are also many successful case studies of introducing machine learning frameworks. Machine learning is a framework in which the system, given correct data, automatically adjusts itself in order to obtain the appropriate answers. Therefore, if good learning algorithms are designed for a given problem class, in other words a service, the data collection and performance improvement loop runs correctly.
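
As an illustration of that loop, here is a minimal sketch in plain Python under the assumption of a toy one-dimensional service: the deployed model handles new inputs, users supply the correct answers, and retraining on the accumulated data improves accuracy round after round. All names and data here are hypothetical.

# Minimal sketch of the data-collection / performance-improvement loop.
import random
random.seed(0)

def true_label(x):
    """Ground truth the service is trying to learn (unknown to the model)."""
    return 1 if x > 0.6 else 0

def train(examples):
    """Fit a one-parameter threshold model to the labeled data collected so far."""
    if not examples:
        return 0.5                               # default threshold before any data exists
    candidates = [x for x, _ in examples]
    # pick the threshold that minimizes training error
    return min(candidates,
               key=lambda t: sum(int((x > t) != y) for x, y in examples))

collected = []                                   # labeled data accumulated by the service
threshold = train(collected)

for round_ in range(5):
    # the deployed service handles new inputs; users supply corrections (labels)
    batch = [random.random() for _ in range(200)]
    collected += [(x, true_label(x)) for x in batch]
    threshold = train(collected)                 # retrain as data accumulates
    test = [random.random() for _ in range(1000)]
    acc = sum(int((x > threshold) == true_label(x)) for x in test) / len(test)
    print(f"round {round_}: {len(collected)} examples, accuracy {acc:.3f}")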
Such machine learning frameworks are applied to character recognition, voice recognition, machine translation, landmark recognition, and facial recognition, and are provided as Internet services. For example, machine translation between related languages, such as English, French, and Spanish, has almost reached a practical level. There is still room for improvement in translation algorithms, and this is a good opportunity for publishing research papers. But the important thing is to actually design a place for Big Data collection in which the machine learning framework is incorporated as part of the service. We must be aware that the pattern recognition research environment has changed significantly in the past decade.


Finding what comes next in facial recognition
In 2001, at a facial recognition conference, Paul Viola and Michael Jones presented an object detector based on boosting. The announcement of this algorithm was the moment when the service called facial identification entered the area of "More Data beats Better Algorithms" (MDbBA). Many algorithm improvements have greatly raised performance since then, but the facial region tracking used in digital cameras is based on the method from that announcement.
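
As a concrete example of that lineage, here is a minimal sketch using the boosted-cascade face detector that ships with OpenCV, a descendant of the Viola-Jones method. It assumes the opencv-python package with its bundled Haar cascade files; "photo.jpg" and "faces.jpg" are placeholder paths.

# Minimal sketch: detect faces with OpenCV's Viola-Jones-style boosted cascade.
import cv2

# Haar cascade trained with the boosted-cascade approach from the 2001 paper.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")                  # placeholder input image
if image is None:
    raise SystemExit("photo.jpg not found; supply your own image")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # the cascade operates on grayscale

# Slide the detector over the image at multiple scales; returns (x, y, w, h) boxes.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", image)
print(f"{len(faces)} face region(s) detected")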
Researchers are requested to do two things.
1. Develop the next MDbBA area by inventing new algorithms and methods. Become the second Viola and Jones.
2. In the MDbBA area, study platforms for Big Data processing, in addition to algorithms. Engineering is designing both the algorithm and the platform.
IT engineers and entrepreneurs are requested to do the following:
Be the first to find out what is ready for commercialization in the MDbBA area and put it to use in an Internet service.
Silicon Valley is not the only outlet for Internet service innovation. The advent of the cloud, database processing technology, and cloud devices has given everyone the opportunity to design Internet services. In general, it is important to think about devices that collect Big Data, and this is not limited to pattern recognition applications; still, pattern recognition technologies are scattered throughout Japan, and by all means I want engineers here to adjust their focus toward them.
The following five topics can be highlighted:
1. What is the next area where More Data beats Better Algorithms? After character, voice, and facial recognition, is the next thing food? How about predicting the weather or consumer behavior?
2. To what degree can the latest algorithms, such as Bayesian modeling, scale for Big Data?
3. What are the facts and fictions about Big Data? How effective is it for social networking analysis?
4. How popular can Hadoop and NoSQL become?
5. What will the Global Brain look like in 10 years?
Big Data is used in a wide range of areas, including marketing, financial security, social infrastructure optimization, and medical care; this conference cannot possibly cover them all. Above all, however, we would like you to see this conference as a place for exchange between pattern recognition researchers and the IT industry.

Minoru Etoh, Ph.D.
Director at NTT DOCOMO, Service & Solution Development Department, and President & CEO of DOCOMO Innovations, Inc. (Palo Alto, CA). Visiting Professor at the Cybermedia Center, Osaka University. He has been engaged in research and development in multimedia communication and mobile networks for 25 years. He joined Matsushita Electric Industrial Co., Ltd. (now Panasonic Corporation) in 1985 and researched moving image coding at the Central Research Laboratory and pattern recognition at ATR. He joined NTT DOCOMO in 2000 and was CEO of DOCOMO USA Labs (Silicon Valley) from 2002 to 2005. He is currently in charge of development related to data mining, media understanding, smartphones, home ICT, machine communication, and information search.