A short endeavour in neuromarketing

Yoyo Yuan
5 min read · Jul 6, 2022


A video version of this article is available here.

Recall the last time you lied about terrible food being great. This social pressure to be polite distorts the feedback that video game developers, marketers, YouTubers and pretty much everyone else rely on.

Not only that: survey participants can forget, misunderstand their own feelings, or be swayed by how a question is worded. Even visual inspection can't easily expose lying.

But it is possible to detect emotions before higher thought occurs, via facial electromyography (fEMG).

fEMG records and processes the electrical signals of facial muscles through electrodes on the skin.

Even just looking at two muscle groups can be valuable for improving advertisements.

We will look at the zygomaticus major and the corrugator muscles, which correspond to smiling and frowning respectively. Their activity is output as a yellow or red LED (with blue indicating neutral), as inspired by this tutorial.

Emotional valence is then displayed in real time while watching YouTube ads.

Stages

Before turning on the LEDs, we’ll need to record, amplify, and process signals.

Recording

The motor cortex initiates muscle movement. The signal travels down through the brainstem or spinal cord, from upper motor neurons to lower motor neurons, which connect directly to muscle fibres.

This sets off a chain of events, including the release of calcium ions within the muscle fibres, which triggers contraction. The more frequently fibres fire and the more muscles are involved, the higher the recorded voltage amplitude will be.

Amplifying

Two muscle groups correspond to two pairs of electrodes, which are just metal discs. Like a wave on a lake, the wave of depolarization passes underneath the skin.

Wave of calcium ions in a mouse cardiac cell.

Because the amplifier records the difference in potential between the two electrodes, a pair placed too close together sees nearly the same signal and yields a smaller potential difference. The electrical signals travel through the wires to the board, where they are amplified and converted into digital signals.

Processing

Software used: the OpenBCI GUI, Python and the Arduino IDE.

Here in the OpenBCI GUI we have a graph with amplitude on the y-axis and time on the x-axis.

There is visible contraction and relaxation in channels 1 and 2. In the lower right corner, an LSL stream starts. LSL (Lab Streaming Layer) is a protocol for streaming lab data between applications on the same network; in this case, it streams to Python.

Python helps to automate the communication between LSL and the Arduino board.

Set up

Here, we import the necessary libraries and set two constants: the minimum time before the next detectable flex, and the amplitude threshold a signal must exceed to count.

Code from the OpenBCI tutorial GitHub
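
The tutorial's code isn't reproduced here, but a minimal sketch of this setup could look like the following. The stream type, the serial port name and the constant names are assumptions on my part, not the tutorial's:

    import time

    import serial  # pyserial, for talking to the Arduino
    from pylsl import StreamInlet, resolve_stream

    THRESHOLD = 0.3  # amplitude a spike must exceed to count as a flex
    COOLDOWN = 0.3   # seconds (300 ms) before the next detectable flex

    # Find the stream the OpenBCI GUI broadcasts over LSL
    # (assuming the GUI's networking widget is set to stream type 'EEG')
    streams = resolve_stream('type', 'EEG')
    inlet = StreamInlet(streams[0])

    # Serial connection to the Arduino; the port name varies by machine
    board = serial.Serial('COM3', 9600)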

Main loop

Inside this while loop, we use if statements to detect whether an EMG spike passes the 0.3 amplitude threshold and whether 300 milliseconds have passed since the last flex.
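
Continuing the sketch above, the loop could look something like this. Which channel maps to which muscle, and the letters sent over serial, are assumptions for illustration:

    last_flex = [0.0, 0.0]  # last accepted flex time for channels 1 and 2

    while True:
        sample, _ = inlet.pull_sample()  # one row of data from the LSL stream
        now = time.time()

        if abs(sample[0]) > THRESHOLD and now - last_flex[0] > COOLDOWN:
            board.write(b'Y')  # channel 1 (zygomaticus): smile, yellow LED
            last_flex[0] = now
        elif abs(sample[1]) > THRESHOLD and now - last_flex[1] > COOLDOWN:
            board.write(b'R')  # channel 2 (corrugator): frown, red LED
            last_flex[1] = now
        else:
            board.write(b'B')  # neither muscle flexed: neutral, blue LED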

Now we can finally program the Arduino.

The full code can be found here

The loop simply converts the letter received over serial, either R, B or Y, into a command to turn on the corresponding LED. The three colours are three copies of the same parallel circuit.
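
A rough sketch of that serial-to-LED loop might look like this (the pin numbers are assumptions; match them to your wiring):

    // Pin assignments are assumptions; match them to the actual wiring.
    const int RED_PIN = 2;
    const int BLUE_PIN = 3;
    const int YELLOW_PIN = 4;

    void setup() {
      Serial.begin(9600);  // must match the baud rate used on the Python side
      pinMode(RED_PIN, OUTPUT);
      pinMode(BLUE_PIN, OUTPUT);
      pinMode(YELLOW_PIN, OUTPUT);
    }

    void loop() {
      if (Serial.available() > 0) {
        char c = Serial.read();  // one of 'R', 'B' or 'Y' from Python
        digitalWrite(RED_PIN, c == 'R' ? HIGH : LOW);
        digitalWrite(BLUE_PIN, c == 'B' ? HIGH : LOW);
        digitalWrite(YELLOW_PIN, c == 'Y' ? HIGH : LOW);
      }
    }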

Circuit for one colour of LED

In each circuit, we start with an NPN transistor. The transistor acts as a physical if-statement.

In the NPN transistor pictured, the right two pins are the inputs and the left pin is the output. If the collector receives 5 V and the base receives a HIGH signal from the Arduino, current flows through to the LED. If the collector receives 5 V but the base doesn't receive HIGH, the LED stays off.

Demo

I watched a number of ads from compilation videos and feature two of them in the video, starting at 4:32.

Both ads were new to me and ran about 35 seconds each. While viewing, I also tried to limit deliberate facial expressions.

Get Shaved in the Face was clearly the more intense of the two and caused discomfort, producing 3 smile and 6 frown signals, whereas Gangnam Style Baby was milder, producing 1 smile and 1 frown signal.

The emotional positivity can be plotted against time to infer which elements of the clip had the most impact.
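
As a sketch of how such a plot could be produced, one might log each smile as +1 and each frown as -1 with a timestamp, then plot the running total. The event times below are invented purely for illustration:

    import matplotlib.pyplot as plt

    # Invented event log for illustration: (seconds into ad, +1 smile / -1 frown)
    events = [(4, 1), (11, -1), (14, -1), (22, -1), (29, 1)]

    times, score, running = [], [], 0
    for t, v in events:
        running += v
        times.append(t)
        score.append(running)

    plt.step(times, score, where='post')
    plt.xlabel('Time into ad (s)')
    plt.ylabel('Running valence (smiles minus frowns)')
    plt.title('Emotional valence over one viewing')
    plt.show()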

If results were consistent across hundreds of viewers, the company could decide how to improve its ad with higher confidence.

But don’t leave yet!

fEMG can be complemented with eye-tracking and electrodermal activity to detect where someone is looking and how intense their emotional response is. The field overall is known as neuromarketing.

For example, the company Affectiva uses deep learning, computer vision, speech analytics and collected data to detect nuanced emotional states like drowsiness.

It has tested how consumers react to vehicle environments, helping car manufacturers, fleet management companies and rideshare providers remodel their interiors. (More about this in another article.)

Affectiva, a spin-out of the MIT Media Lab, has tested more than 53,000 ads in 90 countries.

Advantages

  • Does not depend on cognitive effort or memory
  • Detects non-visible muscle twitches that computer vision misses
  • Can register a response even when participants are inhibiting their expressions

Disadvantages

  • May alter natural facial expression
  • Electrodes are bulky and limit the number of muscles that can be recorded
  • Results can still be altered by medication, e.g. muscle relaxants

Outro

Here, we've only shown the use of two muscle groups. But with more electrode pairs across the face's 43 muscles, elementary emotions such as joy, surprise and fear, and even complex emotions, could be detected. Adding technologies ranging from deep learning to eye-tracking will create a multitude of use cases.
