Human Activity Recognition – Data Science Projects

Human Activity Recognition (HAR) is a rapidly evolving field that involves the development of algorithms and models capable of interpreting and understanding human activities based on sensory inputs. HAR leverages sensor data, such as accelerometer readings, camera images, and audio recordings, to analyze and infer different types of human activities. By employing advanced machine learning and pattern recognition techniques, HAR systems can automatically identify and categorize activities such as walking, running, sitting, and more.

Human Activity Recognition: Significance

In the modern era, HAR has gained tremendous significance and relevance across various domains. HAR technology has the potential to revolutionize sectors such as healthcare, sports, security, and human-computer interaction. By accurately monitoring and interpreting human activities, HAR systems can provide valuable insights and enable a range of applications.

In healthcare, HAR technology plays a crucial role in monitoring physical activities, exercise routines, and overall well-being. It can aid in tracking movements, providing feedback on posture and gait analysis, and assisting in rehabilitation exercises. Such capabilities have the potential to enhance patient care, assist in personalized treatment plans, and promote a healthier lifestyle.

Similarly, in sports and fitness, HAR technology offers detailed insights into athletes’ performance, technique, and injury prevention. By analyzing movements and tracking activity levels, HAR systems can provide athletes with valuable feedback and personalized training recommendations, leading to improved performance and reduced risk of injuries.

In the domain of security and surveillance, HAR technology proves to be an invaluable tool. It enables real-time detection and classification of suspicious or abnormal activities, enhancing public safety and aiding in securing sensitive areas. HAR systems can analyze sensor data from surveillance cameras and other sources to identify potential threats and trigger appropriate responses.

Understanding Human Activity Recognition

Human Activity Recognition (HAR) is a field of study focused on developing algorithms and models that enable computers to interpret and understand human activities based on sensory inputs. HAR systems analyze data collected from various sensors, such as accelerometers, gyroscopes, magnetometers, cameras, and microphones, to extract meaningful insights about human activities. The core principles of HAR involve capturing, processing, and interpreting sensor data to recognize and categorize different types of human activities.

Human Activity Recognition: Sensor Data Role

Sensor data plays a crucial role in HAR by providing the necessary information about human activities. Sensors capture different aspects of human behavior, such as body movements, sound, and environmental factors. For example, accelerometers measure acceleration (the rate of change of velocity), enabling the detection of walking, running, or sitting, while cameras capture visual information that helps identify gestures or actions. By combining data from multiple sensors, HAR systems can obtain a comprehensive view of human activities, improving recognition accuracy and robustness.
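
As a concrete illustration (not part of the project code later in this article), the short sketch below computes the magnitude of a simulated 3-axis accelerometer window and uses a simple variance threshold to tell a stationary window from a moving one. The simulated data and the 0.01 threshold are assumptions chosen purely for demonstration.

import numpy as np

# Simulated 3-axis accelerometer window: 128 samples x 3 axes, in g-units (toy data)
window = np.random.normal(loc=[0.0, 0.0, 1.0], scale=0.02, size=(128, 3))

# The magnitude of acceleration does not depend on how the sensor is oriented
magnitude = np.sqrt((window ** 2).sum(axis=1))

# A stationary subject yields a near-constant magnitude close to 1 g;
# movement shows up as variance (0.01 is an assumed example threshold)
print('moving' if magnitude.var() > 0.01 else 'stationary')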

Human Activity Recognition: Machine Learning and Pattern Recognition Overview

Machine learning and pattern recognition techniques form the backbone of HAR systems. These techniques enable HAR models to learn and recognize patterns in sensor data, leading to accurate activity recognition. Commonly used techniques include:

  1. Supervised Learning: In this approach, HAR models are trained on labeled datasets, where each activity is associated with a specific label. Algorithms such as Support Vector Machines (SVM), Random Forests, or Neural Networks learn to classify activities based on the provided labels.
  2. Unsupervised Learning: Unsupervised learning techniques aim to discover hidden patterns or structures in the data without prior labeling. Clustering algorithms, such as k-means or hierarchical clustering, can group similar activities together without explicit supervision.
  3. Deep Learning: Deep learning, specifically Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), has shown remarkable performance in HAR. CNNs excel at extracting spatial features from sensor data, while RNNs capture temporal dependencies in sequential data, making them suitable for recognizing activities over time.
  4. Feature Extraction: HAR models often rely on feature extraction techniques to transform raw sensor data into a more compact and meaningful representation. Feature extraction methods, such as Fourier Transform, Principal Component Analysis (PCA), or Wavelet Transform, can highlight relevant characteristics for activity recognition. A small sketch combining basic feature extraction with a supervised classifier follows this list.
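
Here is that sketch: it generates toy accelerometer windows for two made-up activities, summarizes each window with per-axis statistics, and fits an SVM. Everything in it (the simulated data, the minimal feature set, the classifier choice) is an illustrative assumption, not the approach used in the project code later in this article.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_windows(n, scale):
    # Simulated 3-axis accelerometer windows: (n windows, 128 samples, 3 axes)
    return rng.normal(0.0, scale, size=(n, 128, 3))

# Two toy "activities": low-variance (e.g. sitting) vs high-variance (e.g. walking)
X_raw = np.concatenate([make_windows(200, 0.05), make_windows(200, 0.5)])
y = np.array([0] * 200 + [1] * 200)

def extract_features(windows):
    # Per-axis summary statistics as a compact representation of each window
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1),
                           windows.min(axis=1), windows.max(axis=1)], axis=1)

X = extract_features(X_raw)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel='rbf').fit(X_tr, y_tr)
print('toy accuracy:', clf.score(X_te, y_te))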

Applications of Human Activity Recognition

The following are some of the key applications of human activity recognition.

Healthcare and Wellness

  1. Monitoring physical activities and exercise routines: Human Activity Recognition (HAR) technology is widely used in healthcare to monitor and track the physical activities and exercise routines of individuals. HAR systems can provide valuable insights into the duration, intensity, and frequency of activities, helping individuals and healthcare professionals gain a better understanding of their overall fitness levels and adherence to exercise regimens.
  2. Posture analysis, gait analysis, and fall detection: HAR systems play a crucial role in analyzing posture and gait patterns. By utilizing sensor data, such as accelerometers and gyroscopes, HAR can assess body movements, identify deviations from normal patterns, and detect potential issues with posture and gait. This capability is particularly beneficial for assessing and addressing musculoskeletal conditions and helping prevent falls among the elderly population.
  3. Rehabilitation exercises and personalized feedback: HAR technology can assist in rehabilitation programs by providing real-time feedback on exercise performance and technique. By monitoring and analyzing movements during rehabilitation exercises, HAR systems can offer personalized feedback, ensuring proper form and maximizing the effectiveness of rehabilitation sessions. This promotes faster recovery and improved outcomes for patients.

Sports and Fitness

  1. Performance tracking and analysis for athletes: HAR has revolutionized the sports and fitness industry by enabling detailed performance tracking and analysis for athletes. By capturing and analyzing data on movements, biomechanics, and performance metrics, HAR systems provide valuable insights into an athlete’s technique, efficiency, and areas for improvement. This information can be used to enhance training programs and optimize performance.
  2. Injury prevention and technique improvement: HAR technology plays a vital role in injury prevention by identifying movement patterns or techniques that may put athletes at risk. By analyzing data from sensors, such as accelerometers and motion capture devices, HAR systems can detect potentially harmful movements and provide real-time feedback to correct techniques and prevent injuries. This proactive approach to injury prevention is instrumental in maintaining athletes’ health and well-being.
  3. Personalized training recommendations: HAR systems can analyze an individual’s activity data and provide personalized training recommendations. By considering factors such as fitness levels, goals, and previous performance, HAR can suggest tailored exercise routines, intensity adjustments, and training progressions. These recommendations help athletes optimize their training and achieve their goals efficiently.

Security and Surveillance

  1. Real-time detection and classification of suspicious activities: HAR technology enhances security and surveillance systems by enabling real-time detection and classification of suspicious activities. By analyzing sensor data from surveillance cameras and other sources, HAR systems can automatically identify and flag activities that may pose a threat. This capability aids in proactive security measures and prompt response to potential security breaches.
  2. Enhancing public safety and securing sensitive areas: HAR systems contribute to public safety by monitoring and analyzing human activities in public spaces. By identifying abnormal or suspicious behaviors, HAR can help identify potential threats, prevent crimes, and ensure the safety of individuals. Furthermore, HAR technology assists in securing sensitive areas by monitoring access points and detecting unauthorized or suspicious activities, strengthening overall security measures.

Human-Computer Interaction

  1. Gesture recognition and touchless control: HAR enables intuitive human-computer interaction by incorporating gesture recognition and touchless control. By analyzing hand movements and gestures, HAR systems can interpret user intentions, allowing for touchless control of devices and interfaces. This technology is particularly useful in scenarios where physical touch is impractical or unsanitary, enabling a more hygienic and user-friendly interaction.
  2. Sign language translation and immersive user experiences: HAR technology has the potential to bridge communication gaps by translating sign language into text or spoken language. By capturing and analyzing hand movements and gestures, HAR systems can interpret sign language and facilitate communication between individuals who are deaf or hard of hearing and those who are not. Additionally, HAR enhances immersive user experiences by recognizing and responding to natural human gestures, providing a more intuitive and engaging interaction with technology.

The applications of Human Activity Recognition are vast and impactful, ranging from healthcare and sports to security and human-computer interaction. With its ability to monitor activities, analyze movements, and provide personalized feedback, HAR technology continues to transform various domains, ultimately improving human lives.

Challenges in Human Activity Recognition

Despite this promise, human activity recognition also comes with the following challenges.

Data variability and its impact on recognition accuracy

One of the primary challenges in Human Activity Recognition (HAR) is the inherent variability in human activities. Activities can differ significantly in terms of movement patterns, execution styles, environmental conditions, and individual characteristics. This variability poses a challenge in developing robust recognition models that can accurately and consistently identify diverse activities. Researchers need to account for these variations and develop HAR systems that can adapt to different scenarios to improve recognition accuracy.

Real-time processing requirements and latency issues

Many HAR applications require real-time processing capabilities to provide instantaneous feedback or response. Real-time HAR is crucial in domains such as healthcare, sports, and security. However, achieving low-latency processing while maintaining high accuracy can be challenging, particularly in resource-constrained environments. HAR systems need to strike a balance between processing speed and recognition accuracy to ensure timely and reliable results.
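
One practical habit when real-time constraints matter is to measure per-window inference latency directly. The toy sketch below times a stand-in prediction function; the function itself is a placeholder assumption, but the same timing pattern applies unchanged to a real HAR model.

import time
import numpy as np

def predict(window):
    # Placeholder for a real HAR model's inference call (assumed for illustration)
    return int(window.var() > 0.01)

window = np.random.rand(128, 3)           # one window of 3-axis sensor data
start = time.perf_counter()
_ = predict(window)
latency_ms = (time.perf_counter() - start) * 1000
print(f'per-window inference latency: {latency_ms:.3f} ms')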

Sensor placement and calibration for reliable data capture

Accurate sensor placement and calibration are essential for reliable data capture in HAR. The quality and reliability of sensor data significantly impact the accuracy of activity recognition. Ensuring consistent and precise sensor positioning across different individuals and activities can be challenging. Moreover, variations in sensor accuracy and calibration can introduce errors and inconsistencies, leading to reduced recognition performance. Overcoming these challenges requires careful calibration procedures, standardized sensor placements, and advanced calibration techniques.
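
A small, hedged example of the kind of calibration step this implies: estimate a constant per-axis offset from a window recorded while the sensor is known to be still, then subtract it from subsequent readings. The simulated values and the assumption that gravity acts only on the z-axis are illustrative.

import numpy as np

# Simulated "still" window with a small constant offset on each axis (toy data)
still_window = np.random.normal(loc=[0.02, -0.01, 1.03], scale=0.005, size=(256, 3))

# Estimate the bias as the deviation from the expected reading at rest (0, 0, 1 g)
bias = still_window.mean(axis=0) - np.array([0.0, 0.0, 1.0])

def calibrate(samples):
    # Remove the estimated constant offset from raw readings
    return samples - bias

corrected = calibrate(still_window)
print('estimated bias:', bias)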

Scalability and generalization across diverse populations

HAR models need to be scalable and capable of recognizing activities across diverse populations. Recognizing activities accurately for individuals with different physical attributes, demographics, and cultural backgrounds is crucial. However, achieving high recognition performance across diverse populations can be complex due to variations in movement patterns, body types, and cultural nuances. Developing robust and generalizable models that can adapt to different individuals and populations is an ongoing research focus in HAR.
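
A common way to test generalization across people is subject-wise cross-validation, where every fold holds out entire subjects rather than random rows. The sketch below demonstrates the protocol with scikit-learn's GroupKFold on random placeholder data, so the printed score is near chance; only the evaluation pattern is the point.

import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(600, 20)               # placeholder feature vectors
y = np.random.randint(0, 6, size=600)     # placeholder activity labels (6 classes)
subjects = np.repeat(np.arange(10), 60)   # 10 subjects, 60 windows each

# Each fold trains on some subjects and tests on completely unseen ones
scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=subjects):
    clf = RandomForestClassifier(n_estimators=50).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print('subject-wise CV accuracy:', np.mean(scores))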

Addressing these challenges is crucial for advancing the field of Human Activity Recognition. Researchers and practitioners continue to explore innovative approaches, including data augmentation, transfer learning, and adaptive models, to improve recognition accuracy, real-time processing capabilities, sensor reliability, and scalability in HAR systems.
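
As one example of the data augmentation mentioned above, sensor windows are often perturbed with noise ("jittering") and random per-axis scaling to mimic sensor and subject variability. The sketch below is a minimal version of that idea; the sigma values are assumed defaults, not tuned settings.

import numpy as np

rng = np.random.default_rng(42)

def jitter(window, sigma=0.03):
    # Add small Gaussian noise to every sample (mimics sensor noise)
    return window + rng.normal(0.0, sigma, size=window.shape)

def scale(window, sigma=0.1):
    # Multiply each axis by a random factor (mimics gain and subject variability)
    factors = rng.normal(1.0, sigma, size=(1, window.shape[1]))
    return window * factors

window = rng.normal(0.0, 0.2, size=(128, 3))   # one toy 3-axis sensor window
augmented = scale(jitter(window))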

By overcoming these challenges, HAR technology can unlock its full potential, enabling more accurate, reliable, and versatile recognition of human activities across various applications and benefiting individuals, industries, and society as a whole.

Advancements in Human Activity Recognition

Alongside these challenges, there have been several notable advancements in Human Activity Recognition.

Deep learning approaches for improved recognition accuracy

Deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have significantly enhanced the accuracy of Human Activity Recognition. These advanced algorithms can automatically learn complex features and capture temporal dependencies, leading to improved recognition accuracy across a wide range of activities.
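
For readers who want to see what such a network looks like, here is a minimal 1D-CNN sketch in Keras. The input shape of 128 timesteps by 9 sensor channels and the 6 output classes are assumptions typical of smartphone HAR datasets, not values taken from the project code below.

from tensorflow import keras
from tensorflow.keras import layers

# Assumed input: windows of 128 timesteps x 9 sensor channels; 6 activity classes
cnn = keras.Sequential([
    layers.Input(shape=(128, 9)),
    layers.Conv1D(64, kernel_size=3, activation='relu'),   # learns local temporal patterns
    layers.Conv1D(64, kernel_size=3, activation='relu'),
    layers.MaxPooling1D(pool_size=2),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation='relu'),
    layers.Dense(6, activation='softmax'),
])
cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
cnn.summary()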

Sensor fusion techniques to enhance reliability

Sensor fusion, combining data from multiple sensors such as accelerometers, gyroscopes, and magnetometers, enhances the reliability of Human Activity Recognition. By integrating information from different sensors, HAR systems can better capture and analyze various aspects of human activities, resulting in more robust and accurate recognition.
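
One simple form of this is feature-level fusion: features computed separately from each sensor are concatenated into a single vector per window before classification. The sketch below uses random placeholder arrays to show only the data-shaping step; an alternative is decision-level fusion, where per-sensor models vote or average their predicted probabilities.

import numpy as np

# Placeholder per-window feature matrices from two sensors (illustrative shapes)
acc_features = np.random.rand(1000, 40)    # accelerometer-derived features
gyro_features = np.random.rand(1000, 40)   # gyroscope-derived features

# Feature-level fusion: one combined feature vector per window for any classifier
fused = np.concatenate([acc_features, gyro_features], axis=1)
print(fused.shape)   # (1000, 80)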

Edge computing and on-device processing for real-time applications

The adoption of edge computing and on-device processing has transformed Human Activity Recognition in real-time applications. By deploying HAR models directly on devices such as smartphones, wearables, and IoT devices, real-time processing becomes possible without relying heavily on cloud infrastructure. This approach preserves privacy, reduces latency, and enables instantaneous feedback or response.
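
A common route to on-device deployment is converting a trained Keras model to TensorFlow Lite. The sketch below builds a small placeholder network (standing in for a trained HAR classifier, with an assumed 561-feature input) and converts it with post-training quantization.

import tensorflow as tf
from tensorflow.keras import layers

# Placeholder model standing in for a trained HAR classifier (561 assumed input features)
demo_model = tf.keras.Sequential([
    layers.Input(shape=(561,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(6, activation='softmax'),
])

# Convert to TensorFlow Lite for inference on smartphones, wearables, or IoT devices
converter = tf.lite.TFLiteConverter.from_keras_model(demo_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_model = converter.convert()

with open('har_model.tflite', 'wb') as f:
    f.write(tflite_model)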

Together, these advancements in deep learning, sensor fusion, and edge computing have pushed the boundaries of Human Activity Recognition, enabling more accurate and reliable recognition, richer data fusion, and real-time processing. They continue to propel the field forward, opening up new possibilities for HAR in various domains and applications.

Human Activity Recognition: Code

You can find the tutorial and dataset of this project on the GitHub page.

1: Importing The Modules

import warnings

import numpy as np                  # linear algebra
import pandas as pd                 # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt     # plotting
import seaborn as sns               # statistical visualization

from sklearn.preprocessing import MinMaxScaler, LabelEncoder

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# KerasTuner: older releases install as "kerastuner", newer ones as "keras_tuner"
from keras_tuner.tuners import RandomSearch

warnings.filterwarnings('ignore')

2: Reading The Data

train_data = pd.read_csv('../input/human-activity-recognition-with-smartphones/train.csv')
test_data = pd.read_csv('../input/human-activity-recognition-with-smartphones/test.csv')

print(f'Shape of train data is: {train_data.shape}\nShape of test data is: {test_data.shape}')

3: Some Analysis

pd.set_option("display.max_columns", None)
train_data.head()
train_data.columns
train_data.describe()
train_data['Activity'].unique()
train_data['Activity'].value_counts().sort_values().plot(kind = 'bar', color = 'pink')

4: Preparing Train And Test Data

# Features are every column except the last two ('subject' and 'Activity');
# the label is the 'Activity' column itself
x_train, y_train = train_data.iloc[:, :-2], train_data.iloc[:, -1]
x_test, y_test = test_data.iloc[:, :-2], test_data.iloc[:, -1]
x_train.shape, y_train.shape, x_test.shape, y_test.shape

# Encode the activity names as integers (fit on train, reuse the same mapping on test)
le = LabelEncoder()
y_train = le.fit_transform(y_train)
y_test = le.transform(y_test)

# Scale every feature to the [0, 1] range (fit on train, apply to test)
scaling_data = MinMaxScaler()
x_train = scaling_data.fit_transform(x_train)
x_test = scaling_data.transform(x_test)

5: Creating A Base Model

# A simple fully connected baseline: one hidden layer with dropout, then a 6-way softmax
model = Sequential()
model.add(Dense(units=64, kernel_initializer='normal', activation='sigmoid', input_dim=x_train.shape[1]))
model.add(Dropout(0.2))
model.add(Dense(units=6, kernel_initializer='normal', activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=64, epochs=10, validation_data=(x_test, y_test))

6: Hypertuning The Model

def build_model(hp):
    # KerasTuner searches over network depth, layer width, initializer, and activation
    model = keras.Sequential()
    for i in range(hp.Int('num_layers', 2, 25)):
        model.add(layers.Dense(units=hp.Int('units' + str(i), min_value=32, max_value=512, step=32),
                               kernel_initializer=hp.Choice('initializer', ['uniform', 'normal']),
                               activation=hp.Choice('activation', ['relu', 'sigmoid', 'tanh'])))
    # Dropout belongs before the output layer, not after the softmax
    model.add(Dropout(0.2))
    model.add(layers.Dense(6, kernel_initializer=hp.Choice('initializer', ['uniform', 'normal']), activation='softmax'))
    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model


tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='project', project_name='Human_activity_recognition')

tuner.search_space_summary()
tuner.search(x_train, y_train,
             epochs=10,
             validation_data=(x_test, y_test))
tuner.results_summary()

# Retrieve the best model found by the search and train it further
model = tuner.get_best_models(num_models=1)[0]
history = model.fit(x_train, y_train, epochs=51, validation_data=(x_test, y_test))

model.summary()

# Early stopping: halt training once training accuracy stops improving for 3 consecutive epochs
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='accuracy', patience=3)
mo_fitt = model.fit(x_train, y_train, epochs=200, validation_data=(x_test, y_test), callbacks=[early_stopping])

accuracy = mo_fitt.history['accuracy']
loss = mo_fitt.history['loss']
validation_loss = mo_fitt.history['val_loss']
validation_accuracy = mo_fitt.history['val_accuracy']

# Plot over however many epochs actually ran before early stopping kicked in
epochs_run = range(len(accuracy))

plt.figure(figsize=(15, 7))
plt.subplot(2, 2, 1)
plt.plot(epochs_run, accuracy, label='Training Accuracy')
plt.plot(epochs_run, validation_accuracy, label='Validation Accuracy')
plt.legend(loc='upper left')
plt.title('Accuracy: Training vs Validation')

plt.subplot(2, 2, 2)
plt.plot(epochs_run, loss, label='Training Loss')
plt.plot(epochs_run, validation_loss, label='Validation Loss')
plt.title('Loss: Training vs Validation')
plt.legend(loc='upper right')
plt.show()

After hyperparameter tuning, the best configuration uses 4 hidden layers, and the final accuracy achieved is 0.9518.
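
Overall accuracy hides how the model does on individual activities. As an optional check (not part of the original walkthrough), the short sketch below reuses the model, x_test, y_test, and le objects defined in the steps above to print a per-class report and a confusion matrix.

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Per-class evaluation, reusing model, x_test, y_test, and le from the steps above
y_pred = np.argmax(model.predict(x_test), axis=1)
print(classification_report(y_test, y_pred, target_names=le.classes_))
print(confusion_matrix(y_test, y_pred))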

Human Activity Recognition: Conclusion

Throughout this article, we have explored the fascinating world of Human Activity Recognition (HAR). We started by understanding the definition and core principles of HAR, highlighting the role of sensor data and the utilization of machine learning techniques. We then delved into the diverse applications of HAR, including healthcare and wellness, sports and fitness, security and surveillance, and human-computer interaction. We discussed the challenges in HAR, such as data variability, real-time processing requirements, sensor placement, and scalability. Finally, we explored the advancements in HAR, including deep learning approaches, sensor fusion techniques, and the integration of edge computing for real-time applications.

Emphasis on the evolving nature of HAR and its potential for the future

Human Activity Recognition is a field that continues to evolve and push boundaries. As technology advances and new research emerges, HAR holds immense potential for the future. The increasing accuracy of deep learning approaches, the reliability of sensor fusion techniques, and the rise of edge computing contribute to the continuous growth and improvement of HAR. With ongoing advancements, HAR has the potential to transform various domains, revolutionizing healthcare, sports, security, and human-computer interaction.

Encouragement to explore and leverage HAR in various domains

As we conclude, it is essential to encourage researchers, practitioners, and enthusiasts to explore and leverage the power of Human Activity Recognition in various domains. The applications and benefits of HAR are vast and diverse. By harnessing HAR technology, we can enhance healthcare outcomes, optimize sports performance, improve security measures, and create more intuitive and immersive user experiences. It is an exciting time to be a part of the HAR community. We can unlock its full potential and shape a future where HAR becomes an integral part of our everyday lives.

In summary, Human Activity Recognition is a dynamic and promising field that combines the realms of AI and human behavior analysis. With its broad applications, ongoing challenges, and rapid advancements, HAR opens up new possibilities for improving human lives and shaping a more connected and intelligent world. Let us continue to explore, innovate, and leverage HAR to realize its transformative capabilities across various domains.

