With the development of the Internet of Things (IoT), the amount of data generated has grown enormously, and traditional servers are no longer able to meet the demands of storing and processing this data. Cloud computing emerged to support these growing requirements; it is characterized by large processing and storage capacity in addition to its distributed environment. Despite the huge potential of cloud computing in terms of fast processing, storage, and the ability to schedule tasks effectively, it suffers from high bandwidth consumption, especially with the large volumes of data generated by IoT applications. Fog computing emerged as a solution to these problems.

Task scheduling and resource allocation in fog computing are therefore critical challenges. Two main issues must be taken into account: first, selecting the terminal server that will be assigned to serve each task; second, the task scheduling algorithm used within that terminal server. The reason for moving computation to the edge is to reduce delay for time-sensitive applications, such as smart driving and healthcare, and the tasks of these applications must be given priority over applications that are more tolerant of response time. This poses an additional challenge in scheduling tasks for edge computing: identifying latency-sensitive applications and more tolerant applications whose tasks can be sent to the cloud rather than to the terminal server serving them.

In conclusion, several classification algorithms, such as Random Forest (RF), Decision Tree (DT), and Multi-Layer Perceptron (MLP), can handle this multi-class problem efficiently. By cleaning the data, splitting it into training and test sets, and training the models, we can evaluate their performance in terms of accuracy, precision, and recall, as sketched below.
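A minimal sketch of this classification pipeline, assuming a tabular dataset of task features with a multi-class label and the scikit-learn library; the file name tasks.csv, the label column task_class, and the model hyperparameters are illustrative assumptions, not details from the study itself:

```python
# Sketch: train and compare RF, DT, and MLP classifiers on a hypothetical
# task-classification dataset, then report accuracy, precision, and recall.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical dataset: one row per task, labelled with a class such as
# "latency_critical", "moderate", or "delay_tolerant".
df = pd.read_csv("tasks.csv")            # assumed file name
df = df.dropna()                          # cleaning: drop incomplete rows
X = df.drop(columns=["task_class"])       # assumed label column
y = df["task_class"]

# Splitting: hold out 20% of the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The MLP benefits from scaled features; tree-based models do not need it.
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "RF": (RandomForestClassifier(n_estimators=100, random_state=42), False),
    "DT": (DecisionTreeClassifier(random_state=42), False),
    "MLP": (MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=42), True),
}

for name, (model, needs_scaling) in models.items():
    Xtr = X_train_s if needs_scaling else X_train
    Xte = X_test_s if needs_scaling else X_test
    model.fit(Xtr, y_train)
    y_pred = model.predict(Xte)
    # Macro averaging weights every class equally in the multi-class setting.
    print(f"{name}: accuracy={accuracy_score(y_test, y_pred):.3f}, "
          f"precision={precision_score(y_test, y_pred, average='macro'):.3f}, "
          f"recall={recall_score(y_test, y_pred, average='macro'):.3f}")
```

Macro-averaged precision and recall are used here so that a rare but important class (e.g., latency-critical tasks) counts as much as the majority classes when comparing the three models.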