Here’s how I predicted Apple’s stock price using Natural Language Processing


This post was originally published by Trist'n Joseph at Towards Data Science

An investigation into NLP using sentiment analysis to predict Apple stock price movements

Stock market prediction refers to the act of attempting to determine the future value of a company’s stock (or another financial instrument) traded on an exchange. Accurately predicting the stock market is like being able to see into the future: anyone who could do it would undoubtedly act on that knowledge to substantially benefit themselves. Imagine knowing that Apple’s stock, currently at $300 per share, will increase by 80% tomorrow, and that you have the means to buy 10 shares.

That would guarantee a return of $2,400 in one day with minimal effort. And if one could do this consistently over at least a year? Wow. Unfortunately, however, accurately predicting stock prices is not that simple. Many factors can affect a stock’s price, and a model that tries to include all of them will most likely produce poor predictions on out-of-sample data. But the stock market also tends to be forward-looking, which means that it reflects investor outlook on the economy. Because of this, I used Natural Language Processing (NLP) and attempted to create a model that predicts Apple’s stock price from the market sentiment of the prior trading day.

NLP generally refers to the manipulation of natural languages, such as text, by software. Some of the most common applications of NLP include speech recognition, chatbots, autocorrect, virtual assistants, and sentiment analysis. For this project, I used sentiment analysis.

Sentiment analysis refers to the interpretation and classification of emotions within a text, and allows for the identification of sentiment (or feeling) towards a particular thing. Models that use sentiment analysis often focus on the polarity (the negativity or positivity) of text. Many articles are published daily providing information about the markets or updates on publicly traded companies. The information presented could persuade individuals to either buy or sell their stocks, which can affect the stock’s price when done on a large scale.

Articles are referred to as unstructured (or unorganized) data, which makes them hard to understand, analyze, and sort through. Sentiment analysis is particularly useful because it makes sense of unstructured data by efficiently processing huge amounts of text and automatically tagging it by polarity. The output of sentiment analysis is also consistent, and this is important because an individual’s interpretation of sentiment is biased towards their point of view. In fact, it is estimated that people only agree about 65% of the time when determining the sentiment of a piece of text. The process of sentiment analysis therefore involves refining a document and extracting keywords, which are then scored by comparing them to a predefined lexicon containing polarities.
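The refine-extract-score step above can be sketched in a few lines of Python. The mini-lexicon here is hypothetical and far smaller than a real one (the project used AFINN, covered later); it only illustrates the mechanics of lexicon-based scoring:

```python
# Hypothetical mini-lexicon mapping words to polarity scores
# (a real analysis would use a full lexicon such as AFINN).
LEXICON = {"gains": 2, "growth": 2, "losses": -3, "concerns": -2, "decline": -2}

def polarity(text: str) -> float:
    """Average polarity of the words in `text` found in the lexicon."""
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(polarity("supply concerns drive losses"))   # negative overall
print(polarity("streaming gains fuel growth"))    # positive overall
```

Averaging keeps the score comparable across articles of different lengths; words absent from the lexicon simply contribute nothing.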

For this project, I collected 267 Apple-related articles published between February 20th, 2020, and June 12th, 2020. Articles are not published only on trading days; it is common for one to appear on a weekend or other non-trading day, and these need to be considered as well. Because of this, any article published on a non-trading day was recorded under the next available trading day.
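The mapping from publication date to trading day can be sketched as below. This version only skips weekends; a complete implementation would also skip market holidays (the example date is illustrative):

```python
from datetime import date, timedelta

def next_trading_day(d: date) -> date:
    """Map a publication date to the next available trading day.
    Only weekends are skipped here; market holidays would also
    need to be excluded in a full version."""
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

# An article published on Saturday, Feb 22, 2020 counts toward Monday, Feb 24.
print(next_trading_day(date(2020, 2, 22)))  # 2020-02-24
```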

Quite often, a quick check of the polarity distribution helps set expectations about the problem or reveals something new about it. By doing this, I noticed that the polarity of articles was centered slightly above zero. A possible reason for this is that authors tend to exploit negative headlines while presenting positive information within the body of an article.

I also found that the relationship between the number of words within an article and the magnitude of an article’s polarity differed between negative and positive articles. The data showed that the number of words in a negative article and the polarity of negative articles are negatively related, whereas the number of words in a positive article and the polarity of positive articles are positively related.

This makes intuitive sense, and it speaks to the amount of effort placed in making an article more negative or more positive. Consider an unimpressed individual writing an article about a negative experience; the worse the experience, the longer the review they are likely to leave.

Knowing this, I decided to create a comparison word cloud to determine what words were most associated with negative and positive articles. I found that “losses”, “earnings” and “supply” were keywords within negative articles, while “streaming” and “gains” were keywords within positive articles.
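The keyword comparison underlying such a word cloud boils down to contrasting word frequencies between the two groups. A minimal sketch with hypothetical article snippets (the project used 267 real articles and a graphical word cloud rather than printed sets):

```python
from collections import Counter

# Hypothetical snippets standing in for the real negative/positive articles.
negative_articles = ["supply concerns deepen losses", "earnings losses widen"]
positive_articles = ["streaming gains accelerate", "streaming growth and gains"]

def word_counts(articles):
    """Pooled word frequencies across a group of articles."""
    return Counter(w for text in articles for w in text.lower().split())

neg, pos = word_counts(negative_articles), word_counts(positive_articles)

# A word is a group's keyword when it appears more often there than
# in the other group; a word cloud sizes words by this dominance.
neg_keywords = {w for w in neg if neg[w] > pos.get(w, 0)}
pos_keywords = {w for w in pos if pos[w] > neg.get(w, 0)}
print(sorted(neg_keywords), sorted(pos_keywords))
```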

Given that a large portion of the articles was published during the coronavirus lockdown period, it makes sense why “streaming” would be associated with positive articles and “supply” with negative ones. Many individuals subscribed to new streaming services, including Apple TV+, which has the potential to give investors a better outlook on the future of Apple. However, there were many concerns that the lockdowns would affect Apple’s production of upcoming products and the supply of current products, which has the potential to give investors a less than stellar outlook on Apple’s future.

Although both the relationship between polarity and effort and the comparison word cloud produced plausible results, I believe they could be improved by using a more appropriate, financial lexicon. For this analysis, I used the AFINN lexicon; it assigns words a score between -5 and 5, where negative scores indicate negative sentiments. I found AFINN to be more appropriate than both the NRC and Bing lexicons: NRC categorizes words into groups like “anger” and “disgust”, while Bing only labels words as “positive” versus “negative”. Even so, I believe that a carefully curated financial lexicon that appropriately classifies domain terms would produce even better results.

By looking at the relationship between standardized prices and polarity, I noticed that polarity had a lagged cumulative effect on Apple’s stock price. Because of this, I hypothesized that the stock price can be modeled by the recursive function `stock price tomorrow = (price today) + constant*(price today)*(sentiment today)`. The function operates by accepting an initial stock price and a vector of sentiments, and every subsequent price is then predicted using the recurrence relation.
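This recurrence can be rolled forward in a short Python function. The starting price, sentiment values, and constant below are illustrative, not the project's fitted values:

```python
def predict_prices(initial_price, sentiments, constant):
    """Roll the recurrence forward, feeding each predicted price
    into the next step:
        price[t+1] = price[t] + constant * price[t] * sentiment[t]
    Returns the full path, starting with the initial price."""
    prices = [initial_price]
    for s in sentiments:
        prices.append(prices[-1] * (1 + constant * s))
    return prices

# Example: start at $300 with three (made-up) days of sentiment scores.
print(predict_prices(300.0, [0.1, -0.05, 0.2], constant=0.5))
```

Note that the recurrence factors as `price * (1 + constant * sentiment)`, so each day's sentiment scales the running price multiplicatively.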

To emphasize, the stock price on day 0 is entered as an initial price for the model to then predict the price tomorrow (day 1). When the model then needs to predict the price for day 2, the predicted value from day 1 is automatically used as the input price; on day 3, day 2’s predicted price will be automatically used as the input price and so on.

The optimal `constant` is determined by choosing the value that minimizes the mean absolute prediction error. The model’s output follows a similar trend to Apple’s actual stock prices up until April but completely diverges thereafter. After analyzing the point of divergence, I realized that the steady stream of negative news was overwhelming the upward effect of positive news. Therefore, I updated the model so that positive sentiments receive a higher weighting from the `constant` term than negative sentiments.
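One simple way to fit such constants is a grid search that keeps whichever candidate pair minimizes the mean absolute error. The sketch below uses toy prices and sentiments and an assumed grid range; it is not the project's actual fitting code:

```python
def predict(initial_price, sentiments, pos_c, neg_c):
    """Asymmetric variant of the recurrence: positive sentiment is
    weighted by pos_c, negative sentiment by neg_c."""
    prices = [initial_price]
    for s in sentiments:
        c = pos_c if s >= 0 else neg_c
        prices.append(prices[-1] * (1 + c * s))
    return prices[1:]  # one prediction per sentiment

def mean_abs_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy data: three days of sentiments and the "actual" closing prices.
sentiments = [0.2, -0.1, 0.15]
actual = [306.0, 303.0, 307.5]

# Grid search over candidate constant pairs in [0, 1].
grid = [i / 100 for i in range(0, 101, 5)]
best = min(((pc, nc) for pc in grid for nc in grid),
           key=lambda c: mean_abs_error(actual, predict(300.0, sentiments, *c)))
print(best)
```

A grid search is crude but transparent; with only two parameters and a smooth error surface it is usually good enough, and a finer grid or a numerical optimizer can refine the result.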

The function was then again optimized by choosing the pair of constants that minimized the mean absolute prediction error. This model significantly reduced the divergence between the predicted and actual stock prices. Lastly, I investigated a recurrence model of the form `stock price tomorrow = (price today) + price_constant*(price today)*(sentiment today) - volume_constant*(volume today)*(sentiment today)`. The predictions from this model yielded the lowest prediction error of all the models created during this project, and the predicted values are seen below.
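A sketch of this final, volume-augmented recurrence follows. The prices, sentiments, volumes, and constants are all illustrative, and the volumes are assumed to be pre-scaled to the same order of magnitude as the prices:

```python
def predict_with_volume(initial_price, sentiments, volumes, price_c, volume_c):
    """Volume-augmented recurrence:
        price[t+1] = price[t]
                     + price_c  * price[t]  * sentiment[t]
                     - volume_c * volume[t] * sentiment[t]
    Volumes are assumed scaled to the same order as prices."""
    prices = [initial_price]
    for s, v in zip(sentiments, volumes):
        p = prices[-1]
        prices.append(p + price_c * p * s - volume_c * v * s)
    return prices[1:]  # one prediction per day

print(predict_with_volume(300.0, [0.1, -0.05], [250.0, 260.0],
                          price_c=0.5, volume_c=0.1))
```

The volume term dampens the sentiment effect on high-volume days, which is one plausible reading of why this variant tracked the actual prices more closely.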

I am still working on this project and I am currently testing the model framework on Goldman Sachs and Exxon Mobil stocks over a similar period to determine whether I will see similar results. I would love any feedback on this work thus far, and I am open to having others work alongside me for the remainder of this project if interested. Please feel free to point out any mistakes that I might have made, or to suggest anything that might have been overlooked.
