Artificial Intelligence and Machine Learning for Foreign Exchange (Fx) Trading Part 4 — Normalization

This series of articles is dedicated to understanding AI/ML and how it relates to Fx trading. Most articles focus on predicting a price, which is almost useless when it comes to finding profitable trading strategies, so that's the focus here.

I have traded Fx for 20 years using traditional statistical and chart analysis, and AI/ML for the last 5 years or so. With a bachelor of engineering, a masters and several certificates in Machine Learning, I wanted to share some of the pitfalls that took me years to learn and explain why it's difficult, but not impossible, to make a system work.

In the previous articles we:
1. Created the most basic “hello world” example, where we gathered data, generated a model and measured our result.
2. Built on that to get “in the ball park”, maybe slightly better than guessing, and improved our measurement.
3. Peered under the covers of Logistic Regression to find its limitations.
4. In this article we will address the normalization problem.

This is in no way financial advice and does not advocate for any specific trading strategy but instead is designed to help understand some of the details of the Fx market and how to apply ML techniques to it.

Normalization is the process by which we put all the data on the same scale. Consider the price of the AUDUSD over the last 7 years.

[Chart: AUDUSD price over the last 7 years]

The price has shifted from approximately 0.55 to 0.90. Remember from the last article that Logistic Regression multiplies a weight (w) by the feature (x, which is price in this case), so if the price level changes significantly the weight loses its accuracy. Also, in training the weight will be scaled for the average price and lose any practical usefulness when the price moves away from it.
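To see the issue concretely, here is a toy sketch (the weight and bias values are made up for illustration, not taken from any fitted model): a model fitted while the price traded near 0.65 produces very different probabilities once the raw price level drifts, even with no change in the recent price action.

#
# Toy illustration with made-up weight (w) and bias (b) values:
# the same fixed weights give very different outputs when the
# raw price level shifts
#
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, b = 8.0, -5.2  # hypothetical values "learned" while price was ~0.65

for price in [0.55, 0.65, 0.90]:
    print('price {:.2f} -> p(long) {:.3f}'.format(price, sigmoid(w * price + b)))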

Remember our model uses the past 4 hours of close prices (for now) to predict if the price will go up (long trade) by 200 points. So let's look at 5 random occasions where that occurred. We will re-use code from previous articles, with a few changes.

First, import our data:

#
# IMPORT DATA From github
#

import pandas as pd
from datetime import datetime

url = 'https://raw.githubusercontent.com/the-ml-bull/Hello_World/main/Fx60.csv'
dateparse = lambda x: datetime.strptime(x, '%d/%m/%Y %H:%M')

df = pd.read_csv(url, parse_dates=['date'], date_parser=dateparse)

df.head(n=10)

Then calculate price changes, etc. Note we have added each forward period as a separately calculated column. This helps with charting and will become the basis for our improved results metrics in a future article.

#
# Create time shifted data as basis for model
#

import numpy as np

df = df[['date', 'audusd_open', 'audusd_close']].copy()

# x is the last 4 values so create x for each
df['x_open'] = df['audusd_open'].shift(4)
df['x_t-4'] = df['audusd_close'].shift(4)
df['x_t-3'] = df['audusd_close'].shift(3)
df['x_t-2'] = df['audusd_close'].shift(2)
df['x_t-1'] = df['audusd_close'].shift(1)

# add all future prices to measurement point
df['y_t-0'] = df['audusd_close']
df['y_t-1'] = df['audusd_close'].shift(-1)
df['y_t-2'] = df['audusd_close'].shift(-2)
df['y_t-3'] = df['audusd_close'].shift(-3)

# y is points 4 periods into the future - the open price now (not close)
df['y_future'] = df['audusd_close'].shift(-3)
df['y_change_price'] = df['y_future'] - df['audusd_open']
df['y_change_points'] = df['y_change_price'] * 100000
df['y'] = np.where(df['y_change_points'] >= 200, 1, 0)

Now let's chart 5 random occasions where the move occurred.

#
# Chart 5 random occasions where the price went up 200 points
#
import random
import matplotlib.pyplot as plt

# the subset of rows where the 200-point move occurred
true_events_df = df[df['y'] == 1]

for chart_ix in range(5):

    random_ix = random.randint(0, len(true_events_df) - 1)
    event = true_events_df.iloc[random_ix]

    x = [-4, -3, -2, -1, 0, 1, 2, 3]
    y = event[['x_t-4', 'x_t-3', 'x_t-2', 'x_t-1', 'y_t-0', 'y_t-1', 'y_t-2', 'y_t-3']]

    event_date = event['date'].strftime('%Y-%b-%d %H')
    plt.plot(x, y, label=event_date)

plt.axvline(x=0, color='black')
plt.legend()
plt.show()

[Chart: 5 sampled events plotted in raw price, each at a different price level]

You can clearly see we are operating at a very different scale in each event. However, in Logistic Regression our weights are fixed and the same for every event, so it just won't work effectively. The input data (the x variables) need to be on the same scale.

In Fx there are a number of methods we can use to rescale the data.

[Figure: candidate rescaling methods: price, points, percentage, min-max, standard deviation]

The Fx market works and talks in points (or pips, to be more precise; a topic for another article). Traders often set their take profits and stop losses at a price or a number of points. It's occasionally done using percentages, but that's not as common. Hence points can make things readable and potentially provide just as good results.
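As a quick illustration of the arithmetic, converting a price change to points on a 5-decimal pair like AUDUSD is just a multiplication by 100,000, the same factor used in the code throughout this article:

#
# Price change to points on a 5-decimal pair
# (the factor matches the 100000 used throughout this article)
#
entry_price = 0.65000
exit_price = 0.65200

points = (exit_price - entry_price) * 100000
print('{:.0f} points'.format(points))  # 200 points, our target move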

Using points the above chart becomes:

#
# Chart 5 occasions where price went up 200 points, using points instead of price
#
import random
import matplotlib.pyplot as plt

for chart_ix in range(5):

    random_ix = random.randint(0, len(true_events_df) - 1)
    event = true_events_df.iloc[random_ix]

    x = [-4, -3, -2, -1, 0, 1, 2, 3]

    # convert each price to points relative to the window's opening price
    y_points = (event[['x_t-4', 'x_t-3', 'x_t-2', 'x_t-1']] - event['x_open']) * 100000
    y_points['y_t-0'] = (event['y_t-0'] - event['audusd_open']) * 100000
    y_points['y_t-1'] = (event['y_t-1'] - event['audusd_open']) * 100000
    y_points['y_t-2'] = (event['y_t-2'] - event['audusd_open']) * 100000
    y_points['y_t-3'] = (event['y_t-3'] - event['audusd_open']) * 100000

    event_date = event['date'].strftime('%Y-%b-%d %H')
    plt.plot(x, y_points, label=event_date)

plt.axvline(x=0, color='black')
plt.legend()
plt.show()

[Chart: 5 sampled events plotted in points, now on a comparable scale]

Immediately we can see things are now on the same scale. But let's compare the price, points, percentage, min-max and standard deviation techniques against each other. As always, let's start with a hypothesis (they are about the same, but points is more readable) and see if it checks out. Note our y variable is always 0 or 1, but we will normalize it for the chart so it's on the same scale (using raw price or points would blow out the scale and make the chart unreadable).
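For reference, here is a minimal sketch of the five rescaling options applied to a single toy window of four closes (the numbers are invented). Note that in the real code below the min-max and standard deviation scalers are fitted column-wise over the whole dataset, not per window; this sketch only illustrates the arithmetic of each transform.

#
# The five rescaling options on one toy 4-close window (invented numbers)
#
import numpy as np

closes = np.array([0.65010, 0.65050, 0.64990, 0.65120])
open_price = 0.65000

print('price:     ', closes)                                # left unchanged
print('points:    ', (closes - open_price) * 100000)        # points from the open
print('percentage:', (closes - open_price) / closes * 100)  # % change, as in the code below
print('minmax:    ', (closes - closes.min()) / (closes.max() - closes.min()))
print('stddev:    ', (closes - closes.mean()) / closes.std())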

To do this we are going to create a number of functions based upon the work we have done previously. This pulls everything together into an iterative approach to finding the difference between the normalization methods (GitHub reference at the bottom of the page).

from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler, StandardScaler

def normalize_data(df, method='price'):

    norm_df = df.copy()
    x_fields = ['x_t-4', 'x_t-3', 'x_t-2', 'x_t-1']
    y_fields = ['y_t-0', 'y_t-1', 'y_t-2', 'y_t-3']

    if method == 'price':
        for field in x_fields:
            norm_df[field + '_norm'] = df[field]
        for field in y_fields:
            norm_df[field + '_norm'] = df[field]

    if method == 'points':
        for field in x_fields:
            norm_df[field + '_norm'] = (df[field] - df['x_open']) * 100000
        for field in y_fields:
            norm_df[field + '_norm'] = (df[field] - df['audusd_open']) * 100000

    if method == 'percentage':
        for field in x_fields:
            norm_df[field + '_norm'] = (df[field] - df['x_open']) / df[field] * 100
        for field in y_fields:
            norm_df[field + '_norm'] = (df[field] - df['audusd_open']) / df[field] * 100

    if method == 'minmax':
        # note: fitting the scaler on the full dataset (train and val) leaks
        # information; acceptable for this comparison, not for a real model
        scaler = MinMaxScaler()
        scaled = scaler.fit_transform(df[x_fields + y_fields])
        norm_field_names = [x + '_norm' for x in x_fields + y_fields]
        norm_df[norm_field_names] = scaled

    if method == 'stddev':
        scaler = StandardScaler()
        scaled = scaler.fit_transform(df[x_fields + y_fields])
        norm_field_names = [x + '_norm' for x in x_fields + y_fields]
        norm_df[norm_field_names] = scaled

    return norm_df
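If you want to sanity-check a method before running the full comparison, a quick describe() on the normalized columns shows the resulting scale (this assumes the df built earlier in the article):

#
# Quick sanity check of the resulting scale (assumes df from above)
#
norm_df = normalize_data(df, method='points')
print(norm_df[['x_t-4_norm', 'x_t-3_norm', 'x_t-2_norm', 'x_t-1_norm']].describe())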

def get_class_weights(y_train, display=True):

    #
    # Create class weights
    #
    from sklearn.utils.class_weight import compute_class_weight

    num_ones = np.sum(y_train)
    num_zeros = len(y_train) - num_ones

    classes = np.unique(y_train)
    class_weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
    class_weights = dict(zip(classes, class_weights))

    if display:
        print('In the training set we have 0s {} ({:.2f}%), 1s {} ({:.2f}%)'.format(
            num_zeros, num_zeros/len(y_train)*100, num_ones, num_ones/len(y_train)*100))
        print('class weights {}'.format(class_weights))

    return class_weights
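For reference, scikit-learn's “balanced” weighting gives each class c a weight of n_samples / (n_classes * n_c), so the rare positive class is weighted up in proportion to how rare it is. A toy example:

#
# 'balanced' weighting: n_samples / (n_classes * count_of_class)
#
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_toy = np.array([0] * 90 + [1] * 10)
weights = compute_class_weight(class_weight='balanced', classes=np.unique(y_toy), y=y_toy)
print(dict(zip(np.unique(y_toy), weights)))  # {0: ~0.56, 1: 5.0}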

def get_train_val(df):

    #
    # Create Train and Val datasets
    #
    x = df[['x_t-4_norm', 'x_t-3_norm', 'x_t-2_norm', 'x_t-1_norm']]
    y = df['y']
    y_points = df['y_change_points']

    # Note Fx "follows" (time series) so randomization is NOT a good idea.
    # Create train and val datasets preserving time order.
    no_train_samples = int(len(x) * 0.7)
    x_train = x[4:no_train_samples]
    y_train = y[4:no_train_samples]

    x_val = x[no_train_samples:-3]
    y_val = y[no_train_samples:-3]
    y_val_change_points = y_points[no_train_samples:-3]

    return x_train, y_train, x_val, y_val, y_val_change_points

from sklearn.metrics import log_loss, confusion_matrix, precision_score, recall_score, f1_score

def show_metrics(lr, x, y_true, y_change_points, display=True):

    # predict from the val set so we have predictions and true values as binaries
    y_pred = lr.predict(x)

    # basic error types
    log_loss_error = log_loss(y_true, y_pred)
    score = lr.score(x, y_true)

    #
    # Customized metrics: a "true positive" is a predicted long where the
    # price actually moved up, regardless of whether it reached 200 points
    #
    tp = np.where((y_pred == 1) & (y_change_points >= 0), 1, 0).sum()
    fp = np.where((y_pred == 1) & (y_change_points < 0), 1, 0).sum()
    tn = np.where((y_pred == 0) & (y_change_points < 0), 1, 0).sum()
    fn = np.where((y_pred == 0) & (y_change_points >= 0), 1, 0).sum()

    precision = 0
    if (tp + fp) > 0:
        precision = tp / (tp + fp)

    recall = 0
    if (tp + fn) > 0:
        recall = tp / (tp + fn)

    f1 = 0
    if (precision + recall) > 0:
        f1 = 2 * precision * recall / (precision + recall)

    # output the errors
    if display:
        print('Errors Loss: {:.4f}'.format(log_loss_error))
        print('Errors Score: {:.2f}%'.format(score*100))
        print('Errors tp: {} ({:.2f}%)'.format(tp, tp/len(y_true)*100))
        print('Errors fp: {} ({:.2f}%)'.format(fp, fp/len(y_true)*100))
        print('Errors tn: {} ({:.2f}%)'.format(tn, tn/len(y_true)*100))
        print('Errors fn: {} ({:.2f}%)'.format(fn, fn/len(y_true)*100))
        print('Errors Precision: {:.2f}%'.format(precision*100))
        print('Errors Recall: {:.2f}%'.format(recall*100))
        print('Errors F1: {:.2f}%'.format(f1*100))

    errors = {
        'loss': log_loss_error,
        'score': score,
        'tp': tp,
        'fp': fp,
        'tn': tn,
        'fn': fn,
        'precision': precision,
        'recall': recall,
        'f1': f1
    }

    return errors

import random
import matplotlib.pyplot as plt

def chart(norm_df, event_ix_to_plot, norm_method, errors):

    fig, ax = plt.subplots()

    for ix in event_ix_to_plot:

        event = norm_df.iloc[ix]

        x = [-4, -3, -2, -1, 0, 1, 2, 3]
        y = event[['x_t-4_norm', 'x_t-3_norm', 'x_t-2_norm', 'x_t-1_norm',
                   'y_t-0_norm', 'y_t-1_norm', 'y_t-2_norm', 'y_t-3_norm']]

        event_date = '{}'.format(ix) + ' - ' + event['date'].strftime('%Y-%b-%d %H')
        ax.plot(x, y, label=event_date)

    ax.axvline(x=0, color='black')
    ax.legend(loc='lower right')

    ax.set_title('Method: {}'.format(norm_method))

    textstr = 'loss: {:.2f}\nTP: {}\nFP: {}\nPrecision: {:.2f}%\nRecall: {:.2f}%\nF1: {:.2f}%'.format(
        errors['loss'], errors['tp'], errors['fp'],
        errors['precision']*100, errors['recall']*100, errors['f1']*100)
    props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
    ax.text(0.05, 0.95, textstr, transform=ax.transAxes, fontsize=8,
            verticalalignment='top', bbox=props)

    plt.show()

import numpy as np
import pandas as pd
from datetime import datetime

def load_data():

    url = 'https://raw.githubusercontent.com/the-ml-bull/Hello_World/main/Fx60.csv'
    dateparse = lambda x: datetime.strptime(x, '%d/%m/%Y %H:%M')

    df = pd.read_csv(url, parse_dates=['date'], date_parser=dateparse)

    df = df[['date', 'audusd_open', 'audusd_close']].copy()

    # x is the last 4 values so create x for each
    df['x_open'] = df['audusd_open'].shift(4)
    df['x_t-4'] = df['audusd_close'].shift(4)
    df['x_t-3'] = df['audusd_close'].shift(3)
    df['x_t-2'] = df['audusd_close'].shift(2)
    df['x_t-1'] = df['audusd_close'].shift(1)

    # add all future prices to measurement point
    df['y_t-0'] = df['audusd_close']
    df['y_t-1'] = df['audusd_close'].shift(-1)
    df['y_t-2'] = df['audusd_close'].shift(-2)
    df['y_t-3'] = df['audusd_close'].shift(-3)

    # y is points 4 periods into the future - the open price now (not close)
    df['y_future'] = df['audusd_close'].shift(-3)
    df['y_change_price'] = df['y_future'] - df['audusd_open']
    df['y_change_points'] = df['y_change_price'] * 100000
    df['y'] = np.where(df['y_change_points'] >= 200, 1, 0)

    return df

event_ix_to_plot = None

for norm_method in ['price', 'points', 'percentage', 'minmax', 'stddev']:

    df = load_data()
    norm_df = normalize_data(df, method=norm_method)

    x_train, y_train, x_val, y_val, y_val_change_points = get_train_val(norm_df)

    # pick the same 6 true events once so every method plots identical occasions
    if event_ix_to_plot is None:
        valid_events = norm_df[norm_df['y'] == 1]
        events_to_plot = valid_events.sample(6)
        event_ix_to_plot = events_to_plot.index

    class_weights = get_class_weights(y_train, display=False)

    lr = LogisticRegression(class_weight=class_weights)
    lr.fit(x_train, y_train)

    errors = show_metrics(lr, x_val, y_val, y_val_change_points, display=False)

    chart(norm_df, event_ix_to_plot, norm_method, errors)

The results for each method, plotting the exact same events, look like this.

[Charts: the same 6 events under each normalization method, with loss, TP, FP, precision, recall and F1 overlaid]

The precisions are roughly the same, though the recall does change a little. Statistically, however, there isn't really a difference in precision (the values are so close that the difference is effectively zero), which suggests normalization makes no difference. That's counterintuitive given our understanding of the math (weight multiplied by feature) and the large body of written work saying normalization is required.
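One way to put a number on “statistically no difference” is a two-proportion z-test on the precision of any two methods. A sketch (the counts here are hypothetical; substitute the tp and tp + fp figures from your own runs, and it assumes statsmodels is installed):

#
# Rough significance check on two precisions (hypothetical counts)
#
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

tp = np.array([210, 225])             # true positives for two methods
predicted_pos = np.array([400, 410])  # tp + fp for each method

stat, p_value = proportions_ztest(count=tp, nobs=predicted_pos)
print('z = {:.2f}, p = {:.3f}'.format(stat, p_value))  # a large p means no evidence of a difference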

There are fundamentally two possibilities:

a) The bulk of the scientific and academic community of very smart PhDs are wrong about the need for normalization.
b) Our original hypothesis (that something in the previous 4 periods of price predicts a sudden change) is incorrect or poorly correlated.

Obviously the latter is far more likely and, in this case, correct. We have not yet proven that hypothesis; that's the bulk of the problem here, and we need to consider a different approach (next article).

You should note that this type of “dead end” hypothesis testing is normal:

- Develop a hypothesis
- test it
- fail to prove it
- move on

is the typical flow of events, and you will do this many, many times before you find something that works for you.

The conclusion here is that none of these methods really gets us to where we need to be; the results are only slightly better than (or the same as) guessing. It's very unlikely any of these options could form the basis of a successful strategy.

In the next article we will make some adjustments to our measurement (which isn't yet quite correct) and explore the workings of the Fx market a little before we move on to adding more features and using different machine learning techniques.

  • Github
    https://github.com/the-ml-bull/hello_world
  • Youtube
    https://youtu.be/WcYi0_H9OgI
  • Twitter
    @the_ml_bull
  • Part 1 — Hello World
    https://medium.com/@the.ml.ai.bull/artificial-intelligence-and-machine-learning-for-foreign-exchange-fx-trading-f1e3c3efef78
  • Part 2 — Extending Hello World
    https://medium.com/@the.ml.ai.bull/artificial-intelligence-and-machine-learning-for-foreign-exchange-fx-trading-part-2-extending-4d93347064a2
  • Part 3 — Logistic Regression
    https://medium.com/@the.ml.ai.bull/artificial-intelligence-and-machine-learning-for-foreign-exchange-fx-trading-part-3-lifting-the-1b7c1a24ac1b