Why Hasn’t Virtual Reality Gone Big Yet?

Why is VR Still A Virtual Dream?

 

“This is a TV that you can strap to your face to make you think you’re somewhere else. Here, try it. I’ll hold your drink.”

This is what virtual reality promised us.

Some would say that it is yet to take off and that its progress has been painfully slow. We got to pondering why, and have come up with a few reasons that explain it.

 


 

1. Hardware limitations. Remember that VR devices have two screens, one per eye, each at a relatively high resolution, plus the mirrored view on your monitor. That is roughly the equivalent of rendering to three screens. On top of that, you need a steady 60 fps or more, at the very least. Add the inherent latency of input devices and controllers, and in some cases, like the Vive, the tracking of the player’s position, and most computers will struggle.

2. As a developer, you reduce your market size greatly, since your audience now needs a computer with VR-capable specs, plus a lot more power on top of that for the game itself.

3. There may be big projects already in the works, but those will not come from indie developers. Such projects usually take two or more years to complete; given that VR devices hit the consumer market only about a year ago, there is still some time to wait.

4. Notable exceptions are AAA games that were or are being ported to VR, such as Fallout 4 or Resident Evil 7, but these were games that succeeded on their own, so the companies are not taking a major risk with the ports.

 

To summarise: “Photo-realistic VR experiences take a considerable amount of money to pull off. Firstly, you need a high enough resolution screen, and what we have is about 1/10th the resolution of where we need to be. Secondly, along with the higher-resolution screen you’d need to be able to push a realistic graphics simulation to a headset at 90+ fps consistently, otherwise you run the risk of ruining the immersion.

The hardware just isn’t there yet; there isn’t a graphics card in existence today with the horsepower and driver software to pilot something as crazy as photo-realistic VR.”

The hardware requirements for a given level of graphics are much higher for VR than they are for a single monitor (or even a triple-monitor setup). To put this in perspective, for VR you need to render roughly 5x as many raw pixels as a current console outputs at 1080p, you need to do additional post-processing on them when you’re done, and you need to do it three times as fast (90+ vs 30 fps).

So you’re talking about (in an oversimplified way) 10 to 15x the hardware requirements for the same content versus a console game. And you can never have dropped frames or stutter (techniques like Time Warp mitigate the impact, but it’s still critical). So this right away means you won’t be able to use the same poly counts and the same shaders – the secret sauce – that make modern games look amazing.
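As a sanity check on that arithmetic, here is a quick back-of-the-envelope calculation. The headset resolution, render oversampling, and frame rates below are rough assumptions, not measurements; the raw ratio lands around 7x before the extra post-processing and safety margin that push the estimate above toward 10 to 15x:

```python
# Rough pixel-throughput comparison: console at 1080p/30 vs a VR headset at 90 fps.
console = 1920 * 1080 * 30                     # ~62 million pixels per second

# Assume a ~2160x1200 dual-eye panel, rendered ~1.4x oversized per axis
# for lens-distortion correction, at 90 fps:
vr = int(2160 * 1.4) * int(1200 * 1.4) * 90    # ~457 million pixels per second

print(f"console: {console / 1e6:.0f} Mpix/s")
print(f"vr:      {vr / 1e6:.0f} Mpix/s")
print(f"ratio:   {vr / console:.1f}x")         # ~7x raw throughput, before
                                               # post-processing overhead
```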

The total VR market right now is in the low single digit millions across all platforms (Vive, Rift, PSVR) and it’s tricky to do cross-platform yet. So you need to pay for your development with 1/50th or less the potential audience you have with a conventional game. That means you’re not going to have $100m budgets and the offices full of texture artists, shader makers, and modelers required to build the detailed worlds you see in big AAA titles. So if you want to make a game, you need to focus your limited budget on the kind of titles and content that you can execute well with the team your market size will support.

And those don’t tend to be the photo-realistic, precisely rendered environments you can build with big budgets and lower pixel/frame-rate requirements.

VR will arrive and make it big soon, just not now.

Give it some time.

3 Must-Know Languages For Machine Learning

Language For The Machines

 

Machine learning is all about making a machine capable of learning from data and improving its output automatically, without being explicitly programmed. The process involves developing algorithms that help the machine learn from the input data. So, in a way, we can say that machine learning is a part of AI (artificial intelligence), and it has been put to very effective use in many fields, like mathematics, psychology, and more.

Well, this is just an intro; more technicalities are still hidden behind the curtains. The biggest one is learning a programming language, because only a good programmer can make use of all the tools and bring out the best in the job.

So, in this article, we shall discuss the 3 must-know languages for a programmer dealing with machine learning.

 

  1. Python

Python is a highly flexible, multi-purpose language. Thanks to these features, it has gained a lot of popularity among developers, programmers, and data scientists. The language has well-established libraries for machine learning – NumPy and SciPy. These two libraries are enough to learn about the linear algebra and kernel methods used in machine learning. The biggest advantage of this language is its easy syntax, which in turn makes the whole machine learning process easier to understand. Those who want to excel in machine learning should start with Python.
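As a taste of that workflow, here is a minimal sketch of the kind of linear-algebra task NumPy handles natively: fitting a least-squares line to toy data (the data itself is made up for illustration):

```python
# Minimal NumPy linear-algebra example: least-squares line fit on toy data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)   # noisy line y = 3x + 2

# Solve the least-squares problem A @ [slope, intercept] ~= y.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"fitted slope={slope:.2f}, intercept={intercept:.2f}")  # ~3.00, ~2.00
```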

 


 

  2. C Language

Developed by Dennis Ritchie, C is the mother of all languages. If you are thinking of building a predictive algorithm from the ground up, this language will help you significantly. But, being the mother of most other programming languages, learning it is not a cakewalk. To get started, you will need solid fundamentals in basic C and its syntax. However, mastery of this language does not mean holding a PhD; it simply means having strong concepts and clear fundamentals. Also, once you are comfortably through learning C, you can even give functional languages like Erlang, Scala, Julia, and Haskell a try.

 

  3. R Language

R is the modern version of the S language developed at Bell Labs. The language, combined with lexical scoping, helps enhance the flexibility of statistical models. In terms of machine learning, it is one of the strongest languages to master. Many GNU packages are available for it, and it can be used for creating powerful algorithms and producing statistical visualizations of their results. Currently, R is most popular in academia and education, but it is quickly gaining ground in other fields as well.

So, now that you have seen the 3 most powerful languages for machine learning, it is time for you to decide which one to start with. Our recommendation is to start with C, the base of all the other languages. After that, one can jump to Python and R.

 

3 Instances Where AI Outperformed Humans

AI Knows From A To Z

 


 

Target found out that a teenager was pregnant before her parents did.

An angry father walks into a Target store in Minneapolis, demanding to talk to the manager:

“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

A few days later:

“I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

Target had a system that assigns each shopper a “pregnancy prediction” score based on the products they buy. The system could also estimate a due date to within a small window, so Target could send coupons timed to very specific stages of the pregnancy.
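Target’s actual model is proprietary, but the mechanics described above can be sketched as a weighted score over purchase signals. The products, weights, and threshold below are purely hypothetical:

```python
# Toy "pregnancy prediction" score: a weighted sum over purchase signals.
# Products, weights, and threshold are hypothetical illustrations.
WEIGHTS = {
    "unscented_lotion": 1.5,
    "large_tote_bag": 0.8,
    "calcium_supplements": 1.2,
    "cotton_balls": 0.6,
}

def pregnancy_score(basket: set) -> float:
    """Sum the weights of signal products present in the basket."""
    return sum(w for item, w in WEIGHTS.items() if item in basket)

shopper = {"unscented_lotion", "calcium_supplements", "cotton_balls"}
if pregnancy_score(shopper) > 2.0:      # hypothetical decision threshold
    print("send baby-product coupons")
```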

This happened in 2012 and it’s hardly state-of-the-art “AI”, but it goes to show that anything creepy a machine learning model does is just a product of how, and with what data, it is trained.

 

Programmer and CMU PhD Tom Murphy created a program to “beat” NES games by watching the score. It worked by trying things that increased the score, then learning how to reproduce them again and again, resulting in high scores. It came up with novel techniques and strategies for playing games and even exploited glitches humans didn’t know about, or at least hadn’t told it about. The program, described as a “technique for automating NES games,” can take on nearly every NES game. Nearly.
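Murphy’s real system (built around lexicographic orderings over memory bytes) is far more subtle, but the core loop described above, i.e. keep whatever inputs make the score go up, can be sketched like this. The play function here is a stand-in for an emulator hook, with a toy score so the sketch actually runs:

```python
# Toy sketch of greedy score-climbing over input sequences.
import random

BUTTONS = ["left", "right", "a", "b", "none"]

def play(inputs):
    """Stand-in for an emulator hook: run the inputs, return the score.
    Here, a toy score that rewards pressing 'right' (moving forward)."""
    return sum(1 for b in inputs if b == "right") + random.random()

def improve(best_inputs, best_score, tries=100):
    """Extend the input tape at random; keep whatever scores higher."""
    for _ in range(tries):
        candidate = best_inputs + [random.choice(BUTTONS) for _ in range(10)]
        score = play(candidate)
        if score > best_score:          # the one objective: score must go up
            best_inputs, best_score = candidate, score
    return best_inputs, best_score

inputs, score = improve([], float("-inf"))
print(f"best score so far: {score:.2f} over {len(inputs)} button presses")
```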

Tom made the program play Tetris. Most of us have played this game, and needless to say, it gets tricky after a certain point. The program struggled to figure out what to do. The choice of Tetris blocks is entirely random, so it’s not surprising that the computer wasn’t able to look far enough ahead to notice that stacking the blocks in certain ways made a big difference.

On one such run, when faced with imminent defeat, the computer did something eerie. Rather than lose, and receive a ‘game over’ message, it just paused the game. Forever.

Tom describes the computer’s reasoning like this: “The only winning move is not to play.” And that’s right: if you pause a game forever, you will never lose it.

 

An artificial intelligence program developed by Elon Musk’s team at OpenAI created a lot of buzz as well. Musk believes that the development of AI should be regulated and that AI safety should be a prime concern for every developer. To give weight to this idea, he started a project called OpenAI. The team used Dota 2 as a testbed to develop their AI.

Now, what’s special is how they trained this bot. They didn’t write any code describing the rules of Dota 2 or the strategies professional players use. They just gave it basic signals (e.g. winning is good, losing is bad, taking damage is bad, dealing damage is good) and made the bot play against a copy of itself. In the beginning, the bot made very stupid decisions. But slowly it started to learn, devising its own strategies and making novel moves. It took the bot 2 hours to beat the existing Dota 2 bot and 2 weeks to reach the level of a professional Dota player!
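OpenAI has not published its exact reward function here, so the sketch below is purely illustrative of that kind of reward shaping: the bot never sees rules or strategies, only scalar “good”/“bad” feedback of roughly this shape (event names and weights are made up):

```python
# Hypothetical reward shaping of the kind described above.
def reward(events: dict) -> float:
    return (
        +1.00 * events.get("won", 0)            # winning is good
        -1.00 * events.get("lost", 0)           # losing is bad
        +0.01 * events.get("damage_dealt", 0)   # giving damage is good
        -0.01 * events.get("damage_taken", 0)   # taking damage is bad
    )

# In self-play, both sides run the same policy; whatever raises this
# reward gets reinforced over huge numbers of games.
print(reward({"damage_dealt": 120, "damage_taken": 45}))   # 0.75
```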

Finally, OpenAI put its bot to the test against many of the world’s top Dota 2 players in 1v1 matches, and it was easily able to defeat them. Then came The International 2017, one of the biggest eSports events in the world. Here, OpenAI’s bot was tested against the player many consider the best Dota 2 player in the world: Danylo “Dendi” Ishutin. To everyone’s surprise, it defeated Dendi in a solid 2–0 before Dendi gave up!

 

A Few More Worthy Mentions 

 

The blink-recognition software in Nikon’s cameras kept asking “Did someone blink?” when Asian users posed in front of the camera: it perceived their eyes as closed.

Recently, a report announced that Facebook had to abandon an experiment after two AIs went off-script and started interacting with each other in a language other than English, one that supposedly made it easier for them to work. Below is what they said to each other.

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

“You i everything else (dot)(dot)(dot)(dot)” – it looks like Bob was devising a plan to kill everyone other than himself and Alice.

 

And that, ladies and gentlemen, is how Artificial Intelligence has evolved over the years.

Fun fact: it is always on the move. It is always evolving.

Augmented Reality Is The Future

AR – The Future Tech

 

Augmented reality (AR) is the coordination of digital information with live video and the user’s environment in real time. Devices utilized for augmented reality generally include a computer, a camera, a processor, and a screen.

Reasons why Augmented Reality will be a future battleground

  • The recent launch of Apple’s ARKit
  • Social media platforms incorporating augmented reality
  • It will change the future of marketing
  • An increasing number of users embracing augmented reality
  • Various mobile apps utilizing AR

 

 

The most prominent example is in the healthcare industry, where you can find more and more professionals engaging with augmented reality to ease their day-to-day tasks.

  • A doctor is able to view a patient’s medical history displayed over the latest medical scan, and even over the patient himself.
  • Healthcare practitioners are now able to project medical imagery onto patients’ bodies using head-mounted displays. Projecting CT scans through the display can give doctors “X-ray vision” of patients and provide important contextual cues for diagnosis.
  • Patients are educated through simulation about their medical conditions (cataracts or AMD) using apps like Eye Decide.
  • Patients get reminders to take their drugs by wearing Google Glass with an app installed on the device.
  • A nurse views a perfectly highlighted image of the patient’s veins, so the IV can be inserted in one painless attempt.

Some more facts and figures that prove that AR is the next big thing in tech are:

  • The dedicated augmented reality market is expected to reach $659.98 million by the end of 2018.
  • According to Digi-Capital, AR/VR could hit $150 billion in revenue by 2020, with VR taking around $30 billion and AR $120 billion.
  • By the end of 2017, sales of augmented reality smart glasses are expected to be worth $1.2 billion.
  • According to ISACA, 60% to 70% of consumers see clear benefits in using AR and IoT devices in their daily lives at work.
  • According to Forrester Research, 14.4 million U.S. enterprise workers are expected to be using smart glasses by 2025.
  • According to Gartner, smart glasses will save the field-service industry nearly $1 billion per year.

 

AR is not limited to a particular sphere, either. It can be utilized across all segments of the market for branding and marketing purposes.

 

1. Construction, engineering and architecture – A holographic representation provides an unmatched level of real-world proportion, scale, form, and perspective compared to traditional ways of building models.

2. Product configurators – AR/MR apps are useful to product designers because they enable faster prototyping and 3D model visualization.

3. Healthcare – With AR headsets, doctors and dentists can show their patients a 3D view of the organ or section of the mouth that they are going to operate on.

4. Education – The main advantage is that 3D images and simulations can be created for students of all age groups. It is ideal for STEM education.

5. Augmented field service – Companies can equip their field technicians with AR headsets and ensure that experienced engineers are on hand to guide technicians working in remote locations.

6. Engaging advertising – Brands can incorporate AR elements in their advertisements and offer coupons to drive customer footfall into the store.

7. Events – Event organizers and exhibitors are turning to augmented reality to increase interactivity at their events, which helps in attracting visitors.

8. Product demonstrations – Augmented reality apps can give your potential customers an accurate view of a product. Furniture stores, home decorators, and fashion stores are ideally suited to take advantage of this technology.

9. Interactive websites – Websites that use augmented reality have seen a decrease in visitor bounce rate. The result is that sales conversions, downloads, and even total page visits increase.

10. AR-enhanced tours – A tourist walking through a historic place can be given information on their mobile phone, overlaid on real-world images.

 

Augmented reality, along with virtual reality, is changing the world on a daily basis. The applications are endless, and the possibilities are limited only by our imagination.

4 E-commerce Fraud Detection Trends in 2017

Fraud Detection Trends in 2017

 

Before we welcome the upheavals of 2018, let us look back at some of the trending online scandals that created turmoil in the online market. Unsurprisingly, 2017 was yet another year filled with eerie and troublesome e-commerce and online frauds. So, before you go ahead and start your own online retail business, read on to prepare yourself against the potential frauds coming your way.

This post will discuss the frauds, and then their detection trends, one at a time.

 

Fraud 1: Cross-border frauds

Much of the growth in e-commerce is now happening overseas, especially in Asia, which is also a hub for many online frauds. Retailers around the globe are looking to expand across their borders into Asia’s growing market. Yet they will have to watch out for the rise in cross-border fraud, which is a clear warning sign to keep in check.

Trend 1: Remove fraud filters

According to a fraud survey, most international merchants rely on the Address Verification System (AVS) for order validation, which has proved more or less redundant for validation. Therefore, retailers are no longer relying heavily on AVS. About 70% of merchants have set their own rules for international orders, such as limiting payment methods and banning IP addresses from certain countries.

Fraud 2: Clean fraud

A criminal mind usually moves two steps ahead. With the privileges of smarter technology, fraudsters now have more refined methodologies, allowing the most sophisticated kind of fraud, popularly termed clean fraud. With clean fraud, it is not possible to detect anything until a chargeback is incurred.


Trend 2: Crosschecking the shopper’s transactions

This means regular and consistent crosschecking of previous transactions against ongoing ones. Look for repeated orders; it will help you identify die-hard shoppers and distinguish them from the swindlers.
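As a toy illustration of such a crosscheck (the file, column names, and thresholds are hypothetical, not from the survey), this is roughly what it might look like in pandas:

```python
# Toy crosscheck: flag shoppers whose latest order deviates sharply
# from their own purchase history.
import pandas as pd

# orders.csv: one row per order, with shopper_id, amount, and timestamp ts.
orders = pd.read_csv("orders.csv")

# Each shopper's history: number of orders and average order size.
history = (orders.groupby("shopper_id")["amount"]
                 .agg(["count", "mean"])
                 .reset_index())

# The most recent order per shopper.
latest = orders.sort_values("ts").groupby("shopper_id").tail(1)

merged = latest.merge(history, on="shopper_id")
merged["suspicious"] = (
    (merged["count"] >= 3) &                   # an established shopper...
    (merged["amount"] > 5 * merged["mean"])    # ...suddenly spending 5x normal
)
print(merged[merged["suspicious"]])
```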

Fraud 3: Fraudsters as a disguise of mobile shoppers

With growing technology, shoppers are moving from desktops to mobile phones. In Japan, 50% of sales now come through mCommerce, and that traffic includes fraudsters as well. Therefore, retailers have to be very vigilant.

Trend 3: Tracking frauds by approval rates

Merchants are worried about fraud via mobile shopping and need something very secure to avoid it. The safest way is to track CNP (card-not-present) fraud and keep a check on order approvals for each particular shopper. Social apps, GPS location, and basic details will definitely help in identifying the fraudster.

Fraud 4: Omni-Channel exodus

Omni-channel is spread all across the web, attached to every prestigious brand and on every pundit’s mind. As many would say, it means “cross-channel done well”. In practice, this means creating a website and a mobile application that give the customer an in-store-quality, better experience. However, sometimes a fraudster takes the place of a genuine buyer and gets in the way of customer satisfaction.

Trend 4: Synchronize data to personalize the shopping experience

The entire concept of telling a fraudster apart from a genuine customer starts with understanding your customer. An omni-channel setup provides you with the information required for a good customer-retailer relationship. It is well suited to keeping track of a customer’s entire history, which can surface geographic locations, demographic trends, and the other signals necessary to know that your customer is not a fraudster.

While one cannot stop digital fraud in e-commerce completely, methods like these can certainly help in preventing it.

Will AI Overtake Human Creativity?

Virtual Intelligence Is Dangerous

 

“AI will be either the best or the worst thing ever to happen to humanity,” said Stephen Hawking when asked for his opinion on Artificial Intelligence.

 

AI Versus Human Creativity

 

A few months earlier, the greatest South Korean Go player, Lee Sedol, was challenged by Google’s artificial player AlphaGo. Go is considered the toughest board game in the world. In chess, there are 20 different choices for the first move, while in Go the first move can be played in 361 different ways. After the first one or two moves, the game becomes more and more complicated.

Lee Sedol attained professional status in Go at 12 years of age. He went on to win 18 international world championships and became a South Korean superstar at a young age.

The match was played in South Korea from 9th to 15th March 2016. 60 million viewers in China and 25 million in Japan watched it live. In South Korea there was a festive atmosphere, as people expected Lee to beat the bot. However, South Korean hearts broke when the results came out.

 

 

The famous Go star, 33-year-old Lee Sedol, lost 4–1 to AlphaGo!

South Korea mourned, but the result also brought forward the idea that human intelligence would slowly be overshadowed by AI.

However, the fact was that AlphaGo was simply calculating much further ahead than its human counterpart. There was, strictly speaking, no creativity involved in its game of Go.

Largely, the past four decades of AI have focused on ever more sophisticated methods for solving ever more highly constrained problems (e.g. chess, Go, memorizing labeled datasets like ImageNet, or constrained quiz tasks like Jeopardy).

The field has unfortunately entered a downward spiral where publications are often judged by how well a given method performs on a particular artificial dataset, compared to 20 past methods on the same dataset. This approach of relying on artificial datasets to measure progress can quickly stifle creativity, and I see rampant evidence of this decline at even the best ML/AI conferences, like NIPS or AAAI, where year after year the accepted papers are largely highly incremental advances on previous work.

Very novel ideas have little chance of success, because they are usually unable to “play the same game” of showing marginal improvement on MNIST, ImageNet, COCO, or one of the dozens of other artificial datasets. It is as if physicists judged their profession by seeing how fast a car they could build with the latest advances in quantum field theory.

Creativity is an ability closely tied to “imagination”. The emphasis in creativity and imagination is not problem-solving at the expert level, but rather “problem creation”, if you will. It is a way of stretching the boundaries of what is possible by being able to ask counterfactual questions. Einstein was a great believer in the power of imagination.

Imagination is what led him to develop the theory of relativity, because he could ask questions like “What would the world look like if I rode a beam of light?” Imagination, he said, “would get you anywhere”, whereas “logic will only get you from A to B”. It is hard to imagine how one could do world-class physics these days without a healthy dose of imagination. It is highly likely that this year’s Nobel Prize in physics will go to the leaders of the LIGO detectors, which detected Einstein’s gravitational waves 100 years after they were predicted. The latest report of a detection comes from two black holes that collided 1.8 billion light years away, releasing more energy in that one event than is released by all the stars in the observable universe. How can one even begin to understand the power of such events without using imagination, since they are so far removed from our everyday experience?

There is strong evidence that imagination is unique to humans, as it is strongly localized in the frontal lobe of the brain, a structure most developed in humans as compared to other animals. Humans with damage to the frontal lobe are largely normal, although they are strikingly “stuck in the present” and unable to imagine the future. If you ask such a person what their plans are for the next week, they will understand the question, but say that their mind is a complete blank when they try to think of the future. Imagination is largely tied to the processes that go on in the frontal lobe, and it is probably also the “seat of creativity”.

 

Jean-Michel Basquiat’s untitled painting of a human skull

 

Fundamental advances are needed to understand how imagination works, and it will take at least the better part of the next decade or two before we begin to develop effective methods. One of our favorite examples of creativity is art. Jean-Michel Basquiat’s untitled painting of a human skull recently sold at a New York auction for over $100 million. It is a strikingly original piece of art, and the Brooklyn-born painter, who started out as a graffiti artist, now commands prices similar to Van Gogh, Picasso, and Monet.

Will AI ever be able to produce great art of this caliber?

Perhaps on that day we should start worrying about the future of AI.

 

5 Advantages Of Apache Cassandra

The Amazing Cassandra

 

Apache Cassandra, an open-source distributed database, has been a much-discussed topic lately. It is a prime choice for a significant number of app development and data management companies, from fresh new start-ups to traditional, storied enterprises. Performing a complete transition from a traditional database to an open-source one has become remarkably easy since NoSQL took a leap forward with the introduction of Apache Cassandra.

With a unique yet considerably efficient ability to deliver real-life performance at scale, Cassandra has been making life easy for companies in web development, software engineering, and data analysis. No wonder Prashant Malik and Avinash Lakshman originally created Cassandra at Facebook back in 2008!

 

How does Cassandra benefit businesses?

With an increasing number of businesses opting for Apache Cassandra, let us take a brief look at its advantages for a growing business.

1 – Elastic Scalability

Many say that elastic scalability is the biggest advantage of using Apache Cassandra. Servers can be added (or removed) at will, so a Cassandra cluster can be scaled up and down as you please without much hassle, i.e. without any downtime or pause for the applications.

 


 

2 – Open Source

We have been hearing a lot about Apache Cassandra being open source. What exactly does that mean? Being open source means it is available to businesses for FREE! Yes, you read that right; you can download it without giving much thought to your pocket. And that is not the end of the awesomeness: there is a huge Cassandra community where niche-specific people can come together and discuss various aspects of this huge open-source project, and it plays well with other Apache projects.

 

3 – Peer to Peer Architecture

Unlike a master-slave setup, Apache Cassandra follows a peer-to-peer architecture, which leaves far fewer chances for failure. This makes it possible to add as many servers as your business wants across data centers to form a Cassandra cluster. All the servers are at the same level, and any machine can answer a request from a client.

 

4 – Fault Tolerance

Usually, what worries any business the most is whether or not its stored data is safe. Well, Apache Cassandra not only secures the data but stores it in multiple locations. Even if one server fails, or someone hacks it, the user can retrieve the data with utmost ease from another location. It is your choice how many replicas to keep, backed by the high-level backup and recovery capabilities of Cassandra.
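As a small sketch of how that replica count is chosen in practice, here is the DataStax Python driver creating a replicated keyspace. The node addresses, keyspace, table, and replication factor are hypothetical:

```python
# Minimal sketch with the DataStax Python driver (`pip install cassandra-driver`).
from cassandra.cluster import Cluster

# Peer-to-peer: any listed node can coordinate the request.
cluster = Cluster(["10.0.0.1", "10.0.0.2"])   # hypothetical node addresses
session = cluster.connect()

# Keep three copies of every row, so losing one server loses no data.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS shop
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")

session.execute("""
    CREATE TABLE IF NOT EXISTS shop.orders (
        order_id uuid PRIMARY KEY,
        customer text,
        total decimal
    )
""")
```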

 

5 – Great Analytics possibilities

There are 4 key methods of carrying out analytics on Cassandra:

  1. Solr-based integrated search
  2. Batch analytics integrating Hadoop with Cassandra
  3. External batch analytics powered by Hadoop and Cloudera/Hortonworks
  4. Spark-based near-real-time analytics

This significantly expands the range and usage of analytics using Cassandra.

 

So, it is safe to say that Apache Cassandra is a total win-win for any organization using it. Be it high performance, predictable scaling, its distributed character, or 100% uptime, Cassandra scores heavily on these parameters and emerges as a preferred open-source distributed NoSQL database management system.

 

 

Is AI Going To Fade Like Nanotechnology

Is AI Overhyped Like NanoTech?

 

Nanotechnology was once hyped to a similar degree, and we cannot help comparing it with what is happening with AI now. There are many things nano can genuinely do, but between 2000 and 2005 companies were renaming projects to “nano” just to get funding.

E.g.: Nano Face Wash, Nano *insert product name here*

There is an explanation for this. It can be understood using the hype curve, which follows Amara’s Law, a computing adage that states:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

The general curve rises steeply to a peak of inflated expectations, drops into a trough of disillusionment, and then climbs a slope of enlightenment to a plateau of productivity.

 

According to the Gartner Hype Cycle for Artificial Intelligence 2017, AI is at the Peak of Inflated Expectations, so we can now expect negative publicity marking the stage of the Trough of Disillusionment.

 

 

Artificial intelligence began in 1950, when the English mathematician Alan Turing published a paper entitled “Computing Machinery and Intelligence”. But the technology trigger happened only over the last decade. We are now in the stage of mass-media hype, where “data science is the way to go”. The expectations are tremendous: we are talking about robots being given citizenship, which is good in some sense and scary as well. Andrew Ng very recently gave a talk on how we have to move more talent to where it is most needed by training them. This shows how we are moving forward with this AI technology.

  1. We need data scientists with skills. Data science is no longer just a skill; it is a way of working.
  2. Data science came about long ago, when we first started finding relations between different things. Now it has been recognized as a separate discipline, because computer science boils down to applied mathematics, which boils down to functions.
  3. Data science is indeed very promising, and a lot of funding goes to those who do it. (The pay at Goldman Sachs should say it all: approximately $104,578-$114,768.)
  4. But for something to become successful, one needs to wait for experiments to happen and results to come out. That is not the case today. We talk data almost every day, and we are so busy that we do shoddy work just to get results out. This is not good, and it is the prime reason why we are entering the phase of disillusionment.

 

Comparison with Nanotechnology

 

With nanotechnology, the hype index shot very high and peaked around mid-2002. It was the data science equivalent back then: you’d have wanted to work there. The news was full of carbon nanotubes and how the future was going to change, and articles at the time went on and on about the miraculous properties of nanomaterials. But do we talk about it today? We read about it occasionally in the newspapers. That is it.

Nanotech in mid-2002 was promising, and the career prospects were great. But analysis showed that it could not live up to its hype in the time available. It all comes down to time, doesn’t it?

By 2005, we had debates on whether nano was a boon or a bane.

As early as 2008, we had books on the hype of nanotech, such as “Nano-Hype: The Truth Behind the Nanotechnology Buzz”.

In 2017, we hardly hear about it, but some real work is going on: nanotech is now on the plateau of productivity. Work on lithium-ion batteries and nanotech startups (MIT’s 30 under 30 has many people working on nanotech, not just data science) is in better shape than ever, but ironically they lack funding, because the hype now is data science and investors run towards the hype. Nobody can help this.

 

Comparison with CFC Discovery 

 

When CFC was first invented, its refrigeration properties were identified in 1928 by Thomas Midgley, who was searching for non-toxic alternatives to the refrigerants of the time, namely ammonia and sulfur dioxide. It caught the media’s attention, and every refrigerator used it, until researchers found out in 1970 that it destroyed the ozone layer. For decades, no one knew the detrimental effects it had on the environment. Funnily enough, it has appeared among the 30 worst discoveries listed by TIME magazine. Now it has been banned, and we are trying to solve the problem created by the previous solution.

From the above analysis, few points are to be noted:

  1. We tend to provide solutions to problems which end up producing further problems, and then we end up cleaning the mess. We seem to be caught in this cycle.
  2. In every single case, whether or not the hype led to productive output, it brought money. One can infer that “research goes where money flows”, and not the other way round. That’s life.
  3. Data science has been carried out since the beginning of time; it was just named physics, chemistry, maths, biology, and so on. It was the interpretation of data and the science behind it, so they named it appropriately.
  4. In today’s exciting world, we want to do things with data that were not thought of before. Hence, data science.
  5. Data science is a way, not a skill. Mechanical engineering is a skill. People who understand this will win.

 

Prominent people are on board: Balaji Viswanathan, CEO of Invento, uses ML for his robots; Andrew Ng sees the need to teach it; Adam D’Angelo believes in it. The other CS giants know it. And I, a mechanical engineering student, am contemplating all this and making sense of it.

The future looks good, but this too shall pass. We are going to create solutions, create a mess, clean it up, create a mess again, and the cycle will repeat.

 

4 Mistakes To Avoid When Using Redis

Red Is Incredible

 

Redis is an in-memory key-value datastore written in ANSI C by Salvatore Sanfilippo. Redis supports not only the string datatype but also lists, sets, sorted sets, and hashes, and it provides a rich set of operations to work with these types. If you have worked with Memcached, an in-memory object caching system, you will find Redis very similar, but Redis is Memcached++. Beyond the rich datatypes, it also supports data replication and can persist data to disk. The key advantages of Redis are:

 

  1. Exceptionally fast: Redis can perform about 110,000 SETs and about 81,000 GETs per second. You can use the redis-benchmark utility to measure the same on your machine.
  2. Rich data types: Redis natively supports most of the datatypes developers already know, like lists, sets, sorted sets, and hashes. This makes it easy to solve a variety of problems, because we know which problem is handled best by which data type.
  3. Atomic operations: all Redis operations are atomic, which ensures that two clients accessing the server concurrently will always see a consistent, updated value.
  4. Multi-utility tool: Redis can be used in a number of use cases, like caching, messaging queues (Redis natively supports publish/subscribe), and any short-lived data in your application, such as web sessions or page hit counts. A list of who’s using Redis can be found on the official website. A minimal sketch of these basics follows below.
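Here is a minimal sketch of those basics using the redis-py client, assuming a local server on the default port:

```python
# Basic Redis datatypes via redis-py (`pip install redis`).
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Strings, with an atomic counter: safe even with concurrent clients.
r.set("page:hits", 0)
r.incr("page:hits")

# Lists as a simple message queue.
r.rpush("jobs", "job-1", "job-2")
job = r.lpop("jobs")

# Hashes for structured records.
r.hset("user:42", mapping={"name": "Ada", "visits": 1})
print(job, r.hgetall("user:42"))
```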

 

 

Here are a few things we suggest thinking about when you are utilising the superpowers of Redis.

  • Choose consistent ways to name and prefix your keys. Manage your namespace.
  • Create a “registry” of key prefixes, mapping each to the internal documentation of the application which “owns” it.
  • For every class of data you put into your Redis infrastructure, design, implement, and test the mechanisms for garbage collection and/or data migration to archival storage.
  • Design, implement, and test a sharding (consistent hashing) library before you’ve invested much in your application deployment, and ensure that you keep a registry of “shards” replicated on each server.

 

Let us explain each of these points in brief.

 

You should assume, from the outset, that your Redis infrastructure will be a common resource used by a number of applications or separate modules.  You can have multiple databases on each server, numbered 0 through 15 by default, though you can increase the number.  However, it’s best to assume that you’ll need key prefixes to avoid collisions among the various applications and modules.

 

Consistent key prefixing & Managing your namespace:

Your applications/modules should provide the flexibility to change these key prefixes dynamically.  Be sure that all keys are synthesized from the application/module prefix concatenated with the key that you’re manipulating; make hard-coding of key strings verboten.
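A tiny sketch of that discipline, with a hypothetical module prefix; the only thing call sites ever touch is the helper, never a literal key string:

```python
# All keys flow through one synthesizer; the prefix comes from configuration.
import redis

r = redis.Redis()            # local server assumed

APP_PREFIX = "shopcart"      # hypothetical module name, set from config

def make_key(*parts: str) -> str:
    """Synthesize 'shopcart:users:42'-style keys from the module prefix."""
    return ":".join((APP_PREFIX, *parts))

r.set(make_key("users", "42", "email"), "ada@example.com")
print(r.get(make_key("users", "42", "email")))
```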

 

Registry: Document and Track your namespace

We suggest reserving certain key patterns (prefixes or glob patterns) on your Redis servers.  For example, you can have __key_registry__ (similar to Python’s reserved method/attribute names) as a hash mapping key prefixes to URLs in your wiki or Trac or whatever internal documentation site you use.  That way you can perform housekeeping on your database contents and track down who or what is responsible for every key you find in any database.  Institute a policy that any key which doesn’t match a pattern in your registry can and will be summarily removed by your automated housekeeping.
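Continuing the sketch, the registry itself can be an ordinary hash, and the housekeeping pass a scan that flags anything unregistered (the documentation URLs are hypothetical):

```python
# Reserved registry hash: key prefix -> internal documentation URL.
import redis

r = redis.Redis()   # local server assumed

r.hset("__key_registry__", mapping={
    "shopcart:": "https://wiki.example.com/shopcart-keys",
    "sessions:": "https://wiki.example.com/session-keys",
})

# Housekeeping pass: flag any key whose prefix is not registered.
registered = [prefix.decode() for prefix in r.hkeys("__key_registry__")]
for key in r.scan_iter(count=100):
    if not any(key.decode().startswith(p) for p in registered):
        print("unregistered key, candidate for removal:", key.decode())
```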

 

Garbage Collection: 

In a persistent, shared key/value store, and in Redis in particular, garbage collection is probably the single biggest maintenance issue.

So you need to consider how you’re going to select the data that needs to be migrated out of Redis, perhaps into your SQL/RDBMS or some other form of archival storage, and how you’re going to track and purge data which is out of date or useless.

The obvious approaches involve the EXPIRE and EXPIREAT features/commands.  These let Redis manage the garbage collection for you, either relative to your manipulation of a given key, or in terms of an absolute time specification.  The only trick about Redis expiration is that you must re-arm it every single time.
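A short sketch of both commands with redis-py; the key name and timeouts are arbitrary illustrations:

```python
# Letting Redis garbage-collect a session key via EXPIRE / EXPIREAT.
import time
import redis

r = redis.Redis()   # local server assumed

session_key = "shopcart:sessions:abc123"   # as synthesized by make_key above
r.hset(session_key, mapping={"user": "42"})

# Relative expiry: drop the key an hour from now.
r.expire(session_key, 3600)

# The TTL does not slide by itself: re-arm it on every access
# if you want idle-timeout behaviour.
r.expire(session_key, 3600)

# Absolute expiry: EXPIREAT takes a Unix timestamp.
r.expireat(session_key, int(time.time()) + 86400)
```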

 

Sharding: 

Redis doesn’t provide sharding.  You should probably assume that you’ll grow beyond the capacity of a single Redis server. Slaves are for redundancy, not for scaling, though you can offload some read-only operations to slaves if you have a way to manage data consistency; for example, the ZSET of key/timestamp values described for expiry can also be used for some offline bulk-processing operations, and the pub/sub features can be used by the master to provide hints regarding the quiescence of selected keys/data.

 

So you should consider writing your own abstraction layer to provide sharding.  Basically, imagine that you have implemented a consistent hashing method, and you run every synthesized key through it before you use it.  While you have only a single Redis server, the hash-to-server mapping always points to your one server.  Later, if you need to add more servers, you can adjust the mapping so that half or a third of your keys resolve to the other servers.  Of course, you’ll want to implement this so that a failure on a primary server causes your library/service module to automatically retry on a secondary and possibly a tertiary server.  Depending on your application, you might even have the tertiary attempts fetch certain types of data from another data source entirely.
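Here is a toy consistent-hash ring of the kind described. Host names are hypothetical, and a production version would add the retry/failover logic mentioned above:

```python
# Toy consistent-hash ring: maps each synthesized key to one Redis host.
import hashlib
from bisect import bisect

class ShardRing:
    def __init__(self, hosts, replicas=64):
        # Place several virtual points per host so keys spread evenly.
        self.ring = sorted(
            (self._hash(f"{host}:{i}"), host)
            for host in hosts
            for i in range(replicas)
        )
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def server_for(self, key):
        # First ring point clockwise from the key's hash (with wraparound).
        idx = bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

# One server today; adding a second later only remaps part of the keyspace.
ring = ShardRing(["redis-a:6379", "redis-b:6379"])
print(ring.server_for("shopcart:users:42"))
```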

 

4 Amazing Messenger Bots

Bots To Look Out For

 

2017 is only partway through, yet we’re already seeing some especially encouraging names in the realm of bots. Since bots first debuted on Facebook Messenger a year ago, developers have been turning out large numbers of the little folks, and it’s hard to miss some of the more creative bots out there.

We already know that bots hold incredible potential for generating leads. Here are some of the developers that are doing it right.

The 4 best Facebook Messenger bots of 2017 so far.

 

 

WTF is That 

 


 

Watch out for this bot. Though still in development, it has turned out to be a particularly helpful little tool: it can identify things from just a photograph, from bugs to peculiar food items. The algorithms behind it are still far from perfect, but it is already proving to be an icebreaker and a convenience across the board.

 

Duolingo 

 


 

It was only a matter of time before a language-learning app joined the bot scene. By letting users chat with friendly, supportive bots, Duolingo makes it simple to practice writing and communicating in another language. The conversations are limited, but they’re a great way to remind yourself of key concepts and vocabulary terms. In addition, there’s an assortment of personalities to interact with, making language learning fun.

 

MeditateBot 

 


 

Staying calm has never been so simple. Exercise-related bots seem like a natural evolution of the entire bot concept and MeditateBot is no different. The bot, developed by the team behind the Calm app, guides users through flexible meditation exercises and allows users to set daily reminders to get into a regular meditation habit.

 

Poncho 

 


 

Weather apps are certainly nothing new, but Poncho accomplishes something many older apps could not: it gives a quick weather report as well as a daily personalized forecast. With other clever features, including a detailed pollen count and daily running forecasts that predict whether the next day will be bright or not, there’s plenty to love. Additionally, Poncho is a friendly little character who shares jokes and helpful tips. Who said bots can’t be adorable?
