
BluePes Blog: Insights & Trends

Emotion Recognition
Emotions are characteristic of humans and of some social animals, such as apes, wolves, and crows. Emotion recognition is an important part of communication between people. The efficiency of human interaction depends on how well we can predict the behavior of the person we are interacting with and, as a result, adjust or change our own behavior. Fear can indicate danger; satisfaction indicates that a conversation is going well. Emotion recognition is not an easy task, as the same emotion may be expressed differently by different people. That said, most people have no trouble distinguishing basic emotions such as fear, anger, disgust, happiness, or surprise, to list a few examples. The question that arises here is whether we can teach a computer to recognize emotions. Thanks to the advancements made in recent years, the answer is yes. Automatic emotion recognition is a field of study in AI. It is the process of identifying human emotion by leveraging techniques from multiple areas, such as signal processing, machine learning, computer vision, and natural language processing. But before we discuss automatic emotion recognition in detail, it is important to explore why this technology is necessary at all. As mentioned above, emotions are a powerful source of information. Several surveys suggest that verbal components convey about one-third of human communication, while nonverbal components convey the other two-thirds. Successful human-computer interaction therefore needs this channel of communication as well.
- Mykola Lavrskyi
- Sep 30, 2019
- 6 min
What is Natural Language Processing (NLP)?
Natural Language Processing (NLP) focuses on using computers to understand and derive meaning from human languages. In this formulation, the challenge for NLP is an extremely difficult one. The average 20-year-old native speaker of American English knows 42,000 words (from 27,000 words for the lowest 5% to 52,000 for the highest 5%) and thousands of grammatical concepts. Communicating in a professional context, or writing books and articles, requires a large body of linguistic knowledge that we spend decades developing. On the other hand, in everyday life, our language needs are less complex: a vocabulary of 3,000 words is enough to cover around 95% of common texts, such as news items, blogs, and tweets, and to infer the rest from context. This facilitates the process of meaning extraction for computers, especially in terms of performing "simple" tasks like summarization, relationship extraction, topic segmentation, etc.
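The coverage claim above is easy to check on any text: count word frequencies and see what fraction of all tokens the most frequent words account for. A minimal sketch, using an invented toy corpus in place of a real collection of news items or tweets:

```python
from collections import Counter

# Toy corpus standing in for "common texts"; in practice you would use a
# large collection of news items, blog posts, or tweets.
corpus = """
the cat sat on the mat and the dog sat by the door
the news today said the weather will be fine and the sun will shine
she read the blog and wrote a tweet about the news and the weather
""".split()

counts = Counter(corpus)
total = len(corpus)

def coverage(top_n):
    """Fraction of all tokens covered by the top_n most frequent words."""
    covered = sum(c for _, c in counts.most_common(top_n))
    return covered / total

for n in (5, 10, 20):
    print(f"top {n:>2} words cover {coverage(n):.0%} of tokens")
```

On a large real corpus, the same computation shows coverage rising steeply at first and flattening out, which is why a few thousand words go such a long way.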
- Mykola Lavrskyi
- Sep 26, 2019
- 6 min
Fraud Detection
Fraud losses are a constant concern for organizations and individuals alike. Interest in this area is justified, given that in 2018, 49% of organizations said they had been victims of fraud and economic crime, according to PwC. Worldwide card fraud losses totaled $24.26 billion in 2017, according to The Nilson Report. Fraud is a widespread, global issue. Organizations should always monitor their data in order to be fraud-resistant. Automating this process can reduce costs and detect fraud faster. Data Science is a powerful helper in detecting fraud and in understanding how fraud works. In addition to detecting known types of fraud, data analysis techniques help to uncover new types of fraud.
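One of the simplest automated monitoring techniques is outlier detection: flag transactions that deviate sharply from an account's usual pattern. A minimal sketch, on invented transaction amounts, using the modified z-score (median and median absolute deviation, which are not distorted by the outliers themselves):

```python
import statistics

# Hypothetical daily card-transaction amounts for one account; the last two
# values are injected anomalies that a monitoring system should flag.
amounts = [23.5, 41.0, 18.2, 35.9, 27.4, 30.1, 22.8, 39.6, 950.0, 1200.0]

def flag_outliers(values, threshold=3.5):
    """Flag values whose modified z-score exceeds the threshold.

    The modified z-score uses the median and the median absolute
    deviation (MAD) instead of the mean and standard deviation, so a few
    extreme values cannot mask themselves by inflating the spread.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

print(flag_outliers(amounts))  # → [950.0, 1200.0]
```

Real fraud-detection systems combine many such signals (amount, time, location, merchant category) and feed them into supervised models trained on labeled fraud cases, but the flag-what-deviates idea is the same.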
- Mykola Lavrskyi
- Sep 02, 2019
- 4 min

Computer Vision
Computer Vision (CV) is one of Artificial Intelligence’s cutting-edge topics. The goal of CV is to extract information from digital images or videos. This information may relate to camera position, object detection and recognition, as well as grouping and searching image content. In practice, the extraction of information is a big challenge, which requires a combination of programming, modeling, and mathematics in order to be completed successfully. Interest in Computer Vision began to emerge among scholars in the 1960s. In those days, researchers worked on extracting 3D information from 2D images. While some progress was made in this regard, limited computing capacity and small, isolated research groups slowed the field’s development. The first commercial application using Computer Vision was an optical character recognition program, which emerged in 1974. This program interpreted typed or handwritten text, with the goal of helping the blind or visually impaired. Thanks to growing computing power and the parallel processing offered by NVIDIA GPUs, significant progress was achieved in deep learning and convolutional neural networks (CNNs).
- Mykola Lavrskyi
- Aug 26, 2019
- 5 min

Data Science in Human Resources
Do companies need to use Data Science when hiring new employees? Big data has changed the recruitment process, and most organizations’ activities more broadly. The scientific analysis era has touched the human resources sector too. Effective data science techniques can provide better quality, higher accuracy, and a cost-effective outcome for HR. Let’s see how data science techniques can help across the different fields and work phases of HR.
- Mykola Lavrskyi
- Aug 07, 2019
- 5 min

Data Science in E-Commerce
More than 20 years ago, e-commerce was just a novel concept, until Amazon sold its very first book in 1995. Nowadays, the e-commerce market is a significant part of the world’s economy. Worldwide e-commerce revenue and retail sales in 2019 were expected to reach $2.03 trillion and $3.5 trillion respectively. This market is developed and diverse, both geographically and in terms of business models. In 2018, the two biggest e-commerce markets were China and the United States, with revenues of $636.1 billion and $504.6 billion respectively. Currently, the Asia-Pacific region shows a stronger growth tendency for e-commerce retail than the rest of the world. Companies use various types of e-commerce in their business models: Business-to-Business (B2B), Business-to-Consumer (B2C), Consumer-to-Consumer (C2C), Consumer-to-Business (C2B), Business-to-Government (B2G), and others. This diversity has emerged because e-commerce platforms provide ready-made connections between buyers and sellers. This is also the reason that B2B’s global online sales dominate B2C’s: $10.6 trillion versus $2.8 trillion. The rapid development of e-commerce generates high competition. Therefore, it’s important to follow major trends in order to drive business sales and create a more personalized customer experience. While using big data analytics may seem like a current trend, for many companies, data science techniques have already been customary tools of doing business for some time. There are several reasons for the efficiency of big data analytics:
- Large datasets make it easier to apply data analytics;
- The high computational power of modern machines even allows data-driven decisions to be made in real time;
- Methods in the field of data science are well developed.
This article will illustrate the impact of using data science in e-commerce and the importance of data collection, starting from the initial stage of your business.
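A familiar example of the personalized experience mentioned above is the "frequently bought together" recommendation. A minimal sketch of the underlying idea, co-purchase counting, on invented order baskets (real systems mine millions of orders and use far more sophisticated models):

```python
from collections import Counter
from itertools import combinations

# Hypothetical order baskets from an online store.
orders = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse", "case"},
    {"phone", "charger"},
    {"laptop", "mouse"},
]

# Count how often each pair of products appears in the same order.
pair_counts = Counter()
for basket in orders:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def frequently_bought_with(product, top_n=2):
    """Products most often co-purchased with the given one."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [item for item, _ in scores.most_common(top_n)]

print(frequently_bought_with("phone"))
```

Even this naive counting recovers sensible suggestions (cases and chargers for a phone), which is why collecting order data from day one pays off.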
- Mykola Lavrskyi
- Aug 05, 2019
- 7 min
Predictive Analytics Workflow
Many companies use predictive models in their activity to provide better customer service, sell more products and services to customers, manage risk from fraudulent activity, and better plan the use of their human resources (to list a few important examples). How does predictive analytics offer all of these benefits? In this article we will consider the process of predictive analytics and its related advantages.
- Mykola Lavrskyi
- Jul 22, 2019
- 4 min

Introduction to Data Science: Resources Available Online
Data Science is a rapidly developing field, with steadily growing demand for data scientists. Job openings for data scientists have increased by 56% over the past year, according to LinkedIn. More and more people want to start their career in Data Science, or plan to use some Data Science techniques in their work. An important question emerges for the people following this route: “Where can I start learning Data Science?” There is no simple answer to this question. Data Science is a complex, multi-disciplinary field. It employs techniques and theories from statistics, multivariable calculus, linear algebra, and Machine Learning. Data scientists need good knowledge in the fields mentioned above, as well as strong programming and data visualization skills. There are many offline and online university programs for those who want to gain a degree in Data Science. In this article, we will consider the case of a person who already has enough background in math, statistics, and programming, and focus on online resources specifically for Data Science. The basic concepts and techniques of Data Science can be learned in different ways, but, in general, it is better to use a resource that gives a complete picture of the subject, such as MOOCs. E-books are also very useful for understanding the basic concepts of Data Science. Usually, books cover a subject in more depth, but less breadth, than MOOCs. So, in my opinion, the best way to start is to find a MOOC or e-book that corresponds to your skill level (according to the required skills for Data Science mentioned above). For your reference, we have listed below some MOOC platforms, courses, and e-books that can be helpful for beginners. MOOCS:
- Mykola Lavrskyi
- Jul 16, 2019
- 6 min
Predictive Analysis in Business
Decision-making in business is often based on assumptions about the future. Many companies aspire to develop and deploy an effective process for understanding trends and relationships in their activity in order to gain forward-looking insight to drive business decisions and actions. This is called predictive analytics. We can define predictive analytics as a process that uses data and a set of sophisticated analytic tools to develop models and estimations of an environment's behavior in the future. In predictive analysis, the first step is to collect data. Depending on your target, data is drawn from varied sources, such as web archives, transaction data, CRM data, customer service data, digital marketing and advertising data, demographic data, machine-generated data (for example, telemetric data or data from sensors), and geographical data, among other options. It is important to have accurate and up-to-date information. Most of the time, you will have information from multiple sources and, quite often, it will be in a raw state. Some of it will be structured in tables, while the rest will be semi-structured or even unstructured, like social media comments. The next important step is to clean and organize the data - this is called data preprocessing. Preprocessing usually takes up 80% of the time and effort involved in the entire analysis. After this stage, we produce a model using already existing tools for predictive analytics. It is important to note that we also use the collected data to validate the model. Such an approach is based on the main assumption of predictive analytics: that patterns in the future will resemble those in the past. You must ensure that your model makes business sense and deploy the analytics results into your production system - software programs or devices, web apps, and so on. The model can only be valid for a certain time period, since reality is not static and an environment can change significantly.
For example, the preferences of customers may change so fast that previous expectations become outdated. So, it is important to monitor a model periodically. There are plenty of applications for business based on predictive analytics. To conclude this article, we will briefly consider some of them.
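The workflow above - fit a model on past data, validate it on held-out recent data, then forecast and keep monitoring - can be sketched end to end with a simple trend model. The monthly sales figures here are invented for illustration:

```python
# Toy historical data: month number and sales for the past year.
months = list(range(1, 13))
sales = [110, 118, 131, 138, 152, 159, 171, 180, 192, 199, 213, 221]

# "Collect" and "preprocess": here the data is already clean; real projects
# spend most of their effort getting to this point.
train_x, train_y = months[:9], sales[:9]
test_x, test_y = months[9:], sales[9:]   # hold out recent months for validation

# Fit a least-squares trend line y = a*x + b on the training period.
n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y))
     / sum((x - mean_x) ** 2 for x in train_x))
b = mean_y - a * mean_x

def predict(x):
    return a * x + b

# Validate: the core assumption is that past patterns continue, so check
# the model against the held-out recent months before trusting it.
errors = [abs(predict(x) - y) for x, y in zip(test_x, test_y)]
print(f"trend: {a:.2f} per month, mean validation error: {sum(errors) / len(errors):.2f}")

# "Deploy": forecast the next month - and keep monitoring the error over
# time, retraining once reality drifts away from the model.
print(f"forecast for month 13: {predict(13):.0f}")
```

Production systems replace the trend line with richer models and automate the retraining, but the collect-preprocess-fit-validate-monitor loop stays the same.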
- Mykola Lavrskyi
- Jul 05, 2019
- 5 min