By Bill West
Worldwide e-commerce sales were estimated to exceed $2.842 trillion in 2018, more than double the 2015 figure, and industry reports forecast continued growth to $4.87 trillion by 2021. With this growth comes increased visibility. Consequently, the e-commerce community has become a common target for malicious hackers, led in large part by a host of spambots.
Today, bots constitute almost half of all web traffic. A good portion of bot traffic, however, falls in the category of “good bots,” which include search engine crawlers as well as bots that perform monitoring functions and other essential tasks. That still leaves more than 20 percent of web traffic belonging to “bad bots.” Bad bots can harm a web business through distributed denial of service (DDoS) attacks, data theft, site scraping, or simply relentless spam attacks.
These bots are designed to bypass and evade even the most advanced detection techniques, and their rapid evolution puts most traditional web security solutions at a disadvantage. Simply put, they are outpacing the technology used to protect against them.
There are several ways bots can have damaging effects on an e-commerce business:
- Site scraping of product listing details and pricing
- Click fraud to increase digital ad spending
- Fake account creation, which inundates the site with bogus user registrations
- Form spam: automated submissions, made in or out of a browser, to your contact form, newsletter signup, and other forms on your site
Form spam is often considered one of the most frustrating issues web owners deal with on a daily basis, and it is a drain on time and resources. Form spam bots submit unwanted information repeatedly, probing until they slip past your security measures, and they are often very difficult to eradicate.
Some of the unwanted data submissions involve advertisements, links to product offers, phishing URLs used to steal your information, and other types of links. Spammers work diligently to create bots that automatically seek out web forms for the purpose of transmitting unwanted and often malicious information. These form spam submissions proliferate throughout a company’s email system in order to generate traffic and ad revenue or direct people to phishing sites that collect personal information for criminal use.
To combat these form spam bots, websites often present the user submitting a form with a textual or picture challenge such as a CAPTCHA. These tests may require a user to type in a set of letters and numbers, or to click on pictures of a specific object, like a storefront or a street sign. CAPTCHAs are somewhat effective against standard bots, but they annoy website visitors and often lead to abandoned shopping carts and decreased site sales.
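As a rough illustration of the “type in a set of letters and numbers” test (not any particular vendor’s implementation; the challenge length and alphabet here are arbitrary assumptions), the core of a text CAPTCHA can be sketched in a few lines of Python:

```python
import random
import string

def generate_captcha(length=6):
    """Generate a random challenge string the user must retype."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

def verify_captcha(expected, submitted):
    """Case-insensitive comparison of the user's answer to the stored challenge."""
    return submitted.strip().upper() == expected.upper()
```

In practice the server would store the generated challenge in the session, render it as a distorted image, and call the verification step when the form is submitted; it is exactly that extra round-trip that frustrates legitimate visitors.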
Research has shown that a large percentage of prospective buyers depart a site immediately upon being presented with a CAPTCHA, and as many as 40 percent fail on the first attempt. This degrades the customer experience, stopping prospective customers before they can complete a transaction. In addition, bots are evolving to the point where these measures are becoming obsolete and ineffective.
Unlike other bot detection and mitigation services, my firm, Ellipsis Technologies, takes an approach that maps natural, organic movements and applies that logic to future site visits to determine whether a site visitor is exhibiting human behavioral characteristics. We call it The Human Presence™, and it’s based on human behavior analysis, proprietary algorithms, and machine learning techniques. The machine learning tools allow our application to evolve and improve over time, providing highly sensitive discrimination between human and bot behaviors.
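The algorithms behind The Human Presence are proprietary, so the following is only a toy sketch of the general idea of behavior analysis: scoring pointer events on features where scripts tend to differ from people. The event format, feature choices, and thresholds here are all illustrative assumptions, not the product’s actual method:

```python
import statistics

def bot_likelihood(events):
    """Score a list of pointer events (x, y, t_ms) in [0, 1].

    Two crude behavioral features: perfectly regular timing between
    events and perfectly straight movement, both of which are more
    typical of automation than of humans.
    """
    if len(events) < 3:
        return 1.0  # too little movement to look human
    pairs = list(zip(events, events[1:]))
    # Humans have jittery inter-event timing; scripts are metronomic.
    gaps = [b[2] - a[2] for a, b in pairs]
    timing_jitter = statistics.pstdev(gaps)
    # Humans rarely move the pointer in a perfectly uniform line.
    dxs = {b[0] - a[0] for a, b in pairs}
    dys = {b[1] - a[1] for a, b in pairs}
    perfectly_straight = len(dxs) == 1 and len(dys) == 1
    score = 0.0
    if timing_jitter < 1.0:
        score += 0.5
    if perfectly_straight:
        score += 0.5
    return score
```

A real system would feed many such features into a trained model rather than hand-set thresholds, and would update that model over time, which is where the machine learning comes in.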
Unlike a CAPTCHA, The Human Presence works in the background, is totally invisible to site visitors, and requires no interaction or response at all, allowing site visitors to shop without encumbrance.
Our technology identifies non-human behavior within milliseconds, allowing the site operator to choose how to respond to suspicious traffic. For instance, the site operator can let human visitors continue on the site without interference, while choosing to automatically test the suspected non-human site visitors with additional verification steps such as a CAPTCHA or routing the bots elsewhere.
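The response policy described above can be sketched as a simple decision over a bot-likelihood score. The threshold values below are illustrative assumptions, not recommended settings:

```python
def handle_visitor(bot_score, challenge_threshold=0.5, block_threshold=0.9):
    """Choose a response from a bot-likelihood score in [0, 1]."""
    if bot_score >= block_threshold:
        return "reroute"    # e.g., send the request elsewhere or drop it
    if bot_score >= challenge_threshold:
        return "challenge"  # e.g., present a CAPTCHA as a second check
    return "allow"          # humans continue without interference
```

The key point is that only traffic already flagged as suspicious ever sees a challenge, so legitimate visitors are never interrupted.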
In short, the goal is to improve the user experience for legitimate human site visitors while identifying and defending against spambots and other malicious traffic.