
Introduction

Many trends are ushering in the era of cloud computing, which leverages web-based development and mainframe technology. Increasingly affordable and powerful processors, coupled with the software-as-a-service (SaaS) model, are transforming data centers into large-scale computing pools. Growing network bandwidth and reliable, flexible network connections make it possible for users to subscribe to high-quality services for data and software that reside at remote data centres, without the burden of direct hardware management. Today, Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) (Kumar and Kumar, 2021) are well-known pioneers of cloud computing. Although these web-based online services provide enormous amounts of storage and customizable computing, this shift in computing platform removes the responsibility for maintaining the data from local machines. As a consequence, users depend on their cloud service provider for the availability and integrity of their data, as illustrated by the recent Amazon S3 outage (Bello et al., 2021; Jayashri and Kalaiselvi, 2021; Ogwel et al., 2021; Aissaoui et al., 2022; Koushik and Patil, 2022).

From the perspective of service quality and data security, cloud computing unavoidably raises challenging security threats. First, conventional cryptographic primitives cannot be adopted directly, because users in cloud computing no longer physically possess their data, so protecting data security amounts to controlling data loss. Verification of correct data storage in the cloud must therefore be performed without explicit knowledge of the entire data. Considering the many kinds of data each user stores in the cloud, together with the need for a continuous long-term guarantee of their security, verifying the correctness of data storage in the cloud becomes even more difficult. Second, cloud computing is not merely a third-party data warehouse: data stored in the cloud is frequently updated by users, including insertion, deletion, modification, appending, and reordering. Ensuring storage correctness under dynamic data updates is therefore of the utmost importance. However, this dynamic feature renders traditional integrity assurance techniques useless and calls for new solutions.

Cloud computing is implemented by data centers running cooperatively in a distributed fashion, and each user's data are stored redundantly at multiple physical locations to reduce threats to data integrity (Babu and Senthilkumar, 2021). Distributed protocols for ensuring storage correctness are therefore of the utmost significance for achieving robust and secure cloud data storage in the real world. Recently, the importance of ensuring the integrity of remote data has been highlighted in several research papers (Mythili et al., 2020; Rajesh and Shajin, 2020; Shajin and Rajesh, 2020; Thota et al., 2020; Mishra et al., 2021). While such techniques can help ensure storage correctness when users do not keep their data locally, they cannot address all the security threats of cloud data storage. Researchers have also proposed distributed protocols to ensure storage correctness across multiple servers; however, their applicability to cloud data storage may still be drastically restricted (Hu et al., 2021).

These problems motivate the development of a new methodology for securing the cloud computing process (Alashhab et al., 2021). To enhance data security, this manuscript develops a novel methodology that raises the security level, which is evaluated against Byzantine failures, malicious data modification attacks, and server colluding attacks. The key contributions of this work are as follows:

To enhance the security of data in cloud computing, a new login process is introduced that improves the protection of data files.

Because security and privacy are major concerns in cloud technology, the work explores technological developments and practices that can meet this challenge.

After presenting the relevant attacks and threats in cloud computing, the work investigates a scheme that is efficient and resilient against Byzantine failures, malicious data modification attacks, and server colluding attacks.

Unlike many of its predecessors, which provide only a binary result on the storage status of distributed servers, the challenge-response protocol used here also provides data error localization.

Unlike most previous work on remote data integrity, the scheme supports secure and efficient dynamic operations on data blocks, namely update, delete, and append.

To analyze security and privacy issues in cloud and mobile cloud systems, the manuscript examines the related attacks and the corresponding countermeasures for safeguarding such systems.

Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failures, malicious data modification attacks, and server colluding attacks.

Finally, the manuscript concludes that cloud and mobile cloud computing platforms are suitable for hosting and analyzing big data, that several existing and emerging security attacks threaten these environments, and that securing big data on such platforms requires new and effective countermeasures.

The rest of this manuscript is organized as follows: the section “Literature review” summarizes related recent studies, the section “Proposed methodology” describes the implementation of the proposed model, the section “Results and discussion” presents the results and discussion, and the section “Conclusion” concludes the manuscript.

Literature review

Some recent studies on security in cloud computing are discussed below.

Chinnasamy et al. (2021) presented a hybrid approach combining ECC and Blowfish for securing data in cloud computing. Cloud providers typically struggle to guarantee that files are protected, since data may be retrieved, misused, or destroyed in its original form; security therefore remains the biggest obstacle in handling and transferring data, and is a major concern in the cloud computing environment. Numerous studies have been proposed to protect the cloud environment. Cryptography is used to overcome the security problem and to achieve the CIA properties (confidentiality, integrity, and availability), but traditional symmetric and asymmetric schemes have certain limitations. ECC and Blowfish were therefore combined into a hybrid approach that overcomes the drawbacks of purely symmetric or purely asymmetric cryptography.

Abdullayeva (2021) presented an autoencoder-based deep learning methodology for APT attack detection. The advantage of the presented method is that it achieves good classification results when recognizing complex relationships among the features of the dataset. In addition, it simplifies the processing of large volumes of data by reducing the data dimensionality. Initially, an autoencoder neural network is used to extract salient characteristics from network traffic data in an unsupervised manner.

Velliangiri et al. (2021) suggested a deep learning-based classifier to detect DDoS attacks. Significant features from the log file were selected for classification using the Bhattacharyya distance measure to reduce classifier training time. A Taylor-Elephant Herd Optimization based Deep Belief Network was developed by modifying Elephant Herd Optimization (EHO) with the Taylor series, and the resulting algorithm was used to train a Deep Belief Network (DBN) for the detection of DDoS attacks.

Shaikh and Meshram (2021) presented the service and deployment models and essential features of cloud computing, together with its security and privacy issues. Cloud computing services were analyzed, namely software as a service, platform as a service, and infrastructure as a service. Several vulnerabilities, attacks, and protection mechanisms for securing the cloud environment were discussed.

Su et al. (2021) presented a decentralized self-auditing scheme for multi-cloud storage, known as DSAS. First, using a symmetric balanced incomplete block design, DSAS accomplishes integrity verification through connections among cloud servers, so the audit cost is shared among the cloud servers. Second, DSAS can locate misbehaving cloud servers at low computation cost and can resist denial-of-service attacks. Third, DSAS can recover corrupted data without retrieving the entire data set. Finally, the security analysis and functional evaluation show that DSAS offers comprehensive security and functionality, and the experimental results show that it is efficient.

Existing system

From the point of view of data security, quality of service has always been a significant factor, and cloud computing inevitably presents new and challenging security threats for a number of reasons.

First, typical cryptographic primitives for protecting data security cannot be adopted directly by users in cloud computing, so verification of correct data storage must be carried out without explicit knowledge of the entire data. The many types of data each user stores in the cloud, together with the demand for long-term continuous assurance of their safety, make this verification even harder.

Second, cloud computing is not just a third-party data warehouse. The data stored in the cloud may be frequently updated by the users, through insertion, modification, deletion, and reordering. Therefore, ensuring correct storage under dynamic data updates is of the utmost significance.

Although these techniques are useful for ensuring storage correctness without users possessing their data, they do not handle all the security threats, because they all focus on a single-server environment and most do not consider dynamic data operations.

Proposed methodology

In this manuscript, an effective and flexible Distributed Scheme with Explicit Dynamic Data Support (DS with EDDS) is proposed to ensure the correctness of user data in the cloud. It relies on erasure-coded file distribution to provide redundancy and guarantee data reliability. This construction dramatically reduces the communication and storage overhead compared with conventional replication-based file distribution systems. By combining a homomorphic token with distributed verification of the erasure-coded data, the proposed scheme achieves both storage correctness assurance and data error localization: when data corruption is detected during the storage correctness verification, the misbehaving server can be identified.

Compared with many predecessors, which deliver only a binary result on the storage status of the distributed servers, the challenge-response protocol also provides data error localization.

Unlike most previous work on remote data integrity, the proposed scheme supports secure and efficient dynamic operations on data blocks.

The performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failures and malicious data modification attacks.
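As an informal illustration of the challenge-response idea with error localization, the Java sketch below is our own simplification: the class name TokenVerifier, the SHA-256-based token, and the three-server setup are assumptions rather than the exact construction used in the scheme. It precomputes one short token per server over that server's coded fragment and, at challenge time, flags exactly the server whose recomputed token no longer matches.

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: per-server verification tokens over coded fragments.
// A mismatch at challenge time localizes the misbehaving server.
public class TokenVerifier {

    // Token = SHA-256 over (secret challenge key || fragment bytes).
    static byte[] token(byte[] challengeKey, byte[] fragment) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(challengeKey);
        md.update(fragment);
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] challengeKey = "user-secret-challenge-key".getBytes();

        // Fragments of one erasure-coded file block, one per storage server.
        Map<String, byte[]> fragments = new HashMap<>();
        fragments.put("server-1", "fragment-A".getBytes());
        fragments.put("server-2", "fragment-B".getBytes());
        fragments.put("server-3", "fragment-C".getBytes());

        // Pre-computation phase: the user keeps one short token per server.
        Map<String, byte[]> storedTokens = new HashMap<>();
        for (Map.Entry<String, byte[]> e : fragments.entrySet()) {
            storedTokens.put(e.getKey(), token(challengeKey, e.getValue()));
        }

        // Simulate corruption on server-2.
        fragments.put("server-2", "tampered".getBytes());

        // Challenge phase: each server's fragment is re-hashed; a mismatch
        // identifies exactly which server holds corrupted data.
        for (Map.Entry<String, byte[]> e : fragments.entrySet()) {
            byte[] fresh = token(challengeKey, e.getValue());
            boolean ok = MessageDigest.isEqual(fresh, storedTokens.get(e.getKey()));
            System.out.println(e.getKey() + (ok ? ": intact" : ": corrupted (localized)"));
        }
    }
}
```

The point of the sketch is only the localization step: instead of a single pass/fail answer, every server is checked individually, so the verifier learns where the corruption occurred.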

Modules
Client module

The client sends a query to the server, and the server returns the related file in response. On the server side, the client's name and password are first verified as a security measure. If this check succeeds, the server accepts the client's query, searches for the related files in the database, and forwards the matching file to the client. If the server identifies an intruder, it routes that intruder to an alternative path. The process of the client module is depicted in Figure 1, and a minimal sketch of this flow is given after the figure.

Figure 1

Client module.
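The Java sketch below illustrates the client-module flow described above; the class and method names (QueryServer, handleQuery), the in-memory credential and file stores, and the decoy path for intruders are illustrative assumptions of ours, not the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the client-module flow: verify the client's
// credentials, look up the requested file, and divert unauthenticated
// requests to an alternative (decoy) path.
public class QueryServer {

    private final Map<String, String> credentials = new HashMap<>(); // name -> password
    private final Map<String, String> fileStore = new HashMap<>();   // query -> file content

    public QueryServer() {
        credentials.put("alice", "alice-pass");
        fileStore.put("report", "contents of report.txt");
    }

    public String handleQuery(String name, String password, String query) {
        // Server-side security check on the client name and password.
        if (!password.equals(credentials.get(name))) {
            // Intruder detected: route to an alternative path instead of real data.
            return "REDIRECT:/decoy/" + query;
        }
        // Authenticated: search the database and forward the matching file.
        return fileStore.getOrDefault(query, "FILE NOT FOUND");
    }

    public static void main(String[] args) {
        QueryServer server = new QueryServer();
        System.out.println(server.handleQuery("alice", "alice-pass", "report"));
        System.out.println(server.handleQuery("mallory", "guess", "report"));
    }
}
```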

System module

The network structure for cloud data storage is depicted in Figure 2.

Figure 2

System architecture.

Here, three different network entities can be identified as follows:

User: users who have data to be stored in the cloud and rely on the cloud for data computation; they include both individual consumers and organizations.

Cloud Service Provider (CSP): the CSP has significant resources and expertise in operating cloud storage, and owns and manages the distributed cloud computing infrastructure. In some cases the CSP asks a customer to commit to a reserved capacity; through such capacity reservation the CSP shares risk with its cloud service customers and thereby reduces the risk of its initial investment in cloud infrastructure.

Third Party Auditor (TPA): the user is concerned about the integrity of the data stored in the cloud, since the data may be attacked or altered by an external adversary. For this reason, the concept of data auditing is introduced, in which an entity called the TPA verifies the integrity of the data. The TPA has the experience and expertise that users lack, and is trusted to assess and report on the reliability of cloud storage services on behalf of its clients upon request.

Cloud data storage module

Here, the user stores data across a set of cloud servers running concurrently, and communicates with the cloud servers via the CSP to access or retrieve the data. Users may also need to perform block-level operations on their data. Users must have security measures in place to guarantee the continued correctness of their stored data, even in the absence of local copies. If users do not have the time, feasibility, or resources to verify their data, they may delegate the task to a trusted TPA of their choice. It is assumed that the point-to-point communication channels between each cloud server and the user are authenticated and reliable, which can be achieved in practice with little overhead.

Cloud authentication server

The cloud authentication server behaves like a typical client authentication protocol, with a few additional behaviours. The first addition is that the client authentication information is sent to the masked router. In this model the authentication server (AS) also acts as a ticketing authority, controlling permissions on the application network. A further optional function of the AS is the updating of client lists: a client is removed from the set of valid clients upon revocation or when the authorization time expires.
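The ticketing behaviour of the AS described above can be sketched as follows; the ticket structure, the 10-minute authorization window, and the revocation list are illustrative assumptions of ours rather than details taken from the implemented system.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the cloud authentication server (AS): it issues a
// time-limited ticket after checking the user, and drops users from the
// valid list once their authorization time expires or they are revoked.
public class AuthenticationServer {

    static class Ticket {
        final String user;
        final long expiresAtMillis;
        Ticket(String user, long expiresAtMillis) {
            this.user = user;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Set<String> validUsers = new HashSet<>();
    private final Set<String> revoked = new HashSet<>();

    public AuthenticationServer() { validUsers.add("alice"); }

    // Issue a ticket valid for a fixed authorization window (here 10 minutes).
    public Ticket issueTicket(String user) {
        if (!validUsers.contains(user) || revoked.contains(user)) return null;
        return new Ticket(user, System.currentTimeMillis() + 10 * 60 * 1000);
    }

    // Control permissions for a request: the ticket must be unexpired and unrevoked.
    public boolean authorize(Ticket t) {
        return t != null && !revoked.contains(t.user)
                && System.currentTimeMillis() < t.expiresAtMillis;
    }

    public void revoke(String user) { revoked.add(user); }

    public static void main(String[] args) {
        AuthenticationServer as = new AuthenticationServer();
        Ticket t = as.issueTicket("alice");
        System.out.println("authorized: " + as.authorize(t));
        as.revoke("alice");
        System.out.println("after revocation: " + as.authorize(t));
    }
}
```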

Unauthorized data modification and corruption

The main problem is successfully detecting unauthorized data modification and corruption, possibly caused by server compromise or random Byzantine failures. Moreover, in the distributed case, once such inconsistencies are detected, it is equally important to locate the server on which the data error resides.

Adversary module

Security threats faced by cloud data storage can come from two sources. On the one hand, the CSP may be self-interested, untrustworthy, and possibly malicious: for monetary reasons it may move data that is rarely or never accessed to a lower tier of storage than agreed, or it may try to hide an incident of data loss caused by management errors, Byzantine failures, and so on.

On the other hand, there may also exist an economically motivated adversary who is able to compromise a number of cloud data storage servers over different time intervals and can consequently modify or delete user data while remaining undetected by the CSP for a certain period. Specifically, two types of adversaries with different capability levels are considered in this work:

Weak Adversary

This adversary is interested in corrupting the user's data files. It may pollute the original data files by modifying them or by inserting its own fraudulent data, preventing the user from recovering the original data.

Strong Adversary

This is the worst-case scenario: the adversary is assumed to be able to compromise all the storage servers, so that data files can be intentionally modified internally while the servers still appear trustworthy. This is equivalent to the case in which all servers collude to hide data loss or corruption.

System architecture

The procedure of the proposed model is detailed in this section and elaborated in the DFD client architecture shown in Figure 3. Both new and existing users must enter their username and password to access the data. If the password is correct, the server connects to the client; otherwise the request is automatically rejected.

Figure 3

System architecture in DFD.

The procedures by which a user logs in to access particular data are detailed in the use case diagram in Figure 4. Here, both the user and the administrator log in and then access the resources, which record the IP address of the user. An unwanted user cannot access the files because it is blocked by the administrator, while the genuine user can find the files and access the data without any interruption.

Figure 4

Use case diagram.

Security analysis

The analysis of the DS with EDDS model proposed for data security in the cloud computing paradigm shows the stages at which data is most vulnerable to threats such as data leakage, modification, and violations of user privacy and confidentiality. The proposed DS with EDDS model is intended to address all of these security problems efficiently.

Unauthorized server

Because data must be conveyed over the network to the cloud, an attacker may simply break into the Internet-based network and impersonate the cloud server towards the owner of the data, resulting in data loss. To avoid data loss in this situation, EDDS is used in this model. Certification authorities (CAs) issue the certificates that serve as credentials in the online world. The cloud server first sends its identifying information to the owner; the owner verifies the certificate and sends a message, and the server responds with a digitally signed acknowledgment, enabling encrypted data transfer between the browser and the server. Furthermore, data and keywords are stored in the cloud in encrypted form.
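The exchange described above can be sketched informally as below; the RSA signature over the server identity stands in for full CA certificate validation, and the AES encryption of the uploaded data is shown in its simplest form, so the whole block should be read as an assumption-laden illustration rather than the actual protocol.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Illustrative sketch: the owner checks the server's signed identity before
// sending anything, and data/keywords are AES-encrypted before being placed
// in cloud storage.
public class SecureUpload {

    public static void main(String[] args) throws Exception {
        // Server proves its identity by signing its identifying information.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair serverKeys = kpg.generateKeyPair();

        byte[] serverIdentity = "cloud-server-01".getBytes();
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(serverKeys.getPrivate());
        signer.update(serverIdentity);
        byte[] identitySignature = signer.sign();

        // Owner verifies the signature (in practice against a CA-issued certificate).
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(serverKeys.getPublic());
        verifier.update(serverIdentity);
        if (!verifier.verify(identitySignature)) {
            throw new SecurityException("Server identity could not be verified");
        }

        // Only after verification are data and keywords encrypted and uploaded.
        SecretKey dataKey = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, dataKey);
        byte[] encryptedFile = cipher.doFinal("confidential file contents".getBytes());

        System.out.println("identity verified, uploaded " + encryptedFile.length + " encrypted bytes");
    }
}
```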

Byzantine failure

Data security is a major problem when users rely on third-party services, owing to the potential for Byzantine faults in the cloud. Byzantine faults are considered more hazardous than benign faults in a cloud computing environment: system components may fail maliciously and produce arbitrary results, and such faults are hard to notice before they cause damage to the system. This work detects Byzantine faults in the cloud computing environment using DS with EDDS, to ensure the robustness of the multi-cloud environment; the fault is detected on the client side by EDDS and the faulty response is discarded so that communication remains secure.
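One simple way to realize this kind of client-side detection, given that each block is stored redundantly on several servers, is a majority vote over the returned copies; the Java sketch below is our own illustration of that idea and is not claimed to be the detection logic used in the implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a simple majority vote over redundant copies flags the
// server whose answer deviates arbitrarily (a Byzantine response).
public class ByzantineCheck {

    public static void main(String[] args) {
        Map<String, String> responses = new HashMap<>();
        responses.put("server-1", "BLOCK-42-DATA");
        responses.put("server-2", "BLOCK-42-DATA");
        responses.put("server-3", "ARBITRARY-GARBAGE"); // Byzantine behaviour

        // Count identical answers and take the majority as the trusted value.
        Map<String, Integer> votes = new HashMap<>();
        for (String value : responses.values()) {
            votes.merge(value, 1, Integer::sum);
        }
        String majority = votes.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();

        // Any server disagreeing with the majority is reported as faulty.
        responses.forEach((server, value) -> {
            if (!value.equals(majority)) {
                System.out.println(server + " flagged as Byzantine");
            }
        });
        System.out.println("accepted value: " + majority);
    }
}
```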

Malicious data modification attack

The data modification attack is an active attack that relies on tampering with the exchanged data. The data may be altered or deleted to change the meaning of a message or to prevent information from reaching its recipients, for example information about an accident or traffic congestion. Modification is an attack on the integrity of the original data: unauthorized parties not only gain access to the data but also manipulate it, for instance by modifying forwarded data packets or flooding the network with fake data, thereby inducing denial-of-service conditions. The proposed model effectively detects such fake data on the server side, which is tested through the queries issued by the user. The login ID and password must be entered on the client side; if they are correct the data is transmitted, otherwise the request is rejected.

Even server colluding attacks

Here, the proposed scheme detects server collusion attacks in both the temporal and spatial dimensions of the distributed setting. Temporal detection is based on sudden changes in the correlation map of a node. For instance, at a particular time certain nodes may pass information to the adversary while continuing to function normally, and at different times the adversary may obtain information from different nodes. The correlation therefore changes slightly, and these changes can be monitored by suitable security protocols. A node that appears to be working properly keeps communicating with its neighbours, and its behaviour is confirmed by the other nodes; at any given time, a node that passes more information than expected can thus be flagged.
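A minimal sketch of the temporal check is given below; the agreement-rate history, window size, and drop threshold are illustrative assumptions of ours, intended only to show how a sudden change in a node's correlation with its neighbours could be flagged.

```java
// Illustrative sketch of the temporal check: each node keeps a short history
// of its agreement rate with its neighbours, and an abrupt drop in that
// correlation at one time step flags possible collusion.
public class CollusionMonitor {

    // Returns true if the latest correlation value drops sharply against the
    // average of the preceding window.
    static boolean suddenChange(double[] correlationHistory, double threshold) {
        int n = correlationHistory.length;
        if (n < 2) return false;
        double sum = 0;
        for (int i = 0; i < n - 1; i++) sum += correlationHistory[i];
        double baseline = sum / (n - 1);
        return (baseline - correlationHistory[n - 1]) > threshold;
    }

    public static void main(String[] args) {
        // Agreement of one node with its neighbours over successive time steps.
        double[] nodeA = {0.96, 0.95, 0.97, 0.94, 0.95}; // stable
        double[] nodeB = {0.95, 0.96, 0.94, 0.95, 0.41}; // abrupt drop

        System.out.println("node A suspicious: " + suddenChange(nodeA, 0.3));
        System.out.println("node B suspicious: " + suddenChange(nodeB, 0.3));
    }
}
```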

In this, the proposed DS with EDDS model is used to identify attacks such as Byzantine failures, malicious data modification attacks, and even server colluding attacks. Denoting an attack agent by k, the data held by agent k at iteration i is given in eqn. (1),

$$z_k^i = \left\{ z_{k1}^i,\, z_{k2}^i,\, \ldots,\, z_{kn}^i \right\}$$

where k denotes the attack agent, i the iteration, and n the total number of data items. Additionally, the random variable of the model for each attack agent k at iteration i is defined in eqn. (2),

$$s_k^i = \begin{cases} z_k^i, & \text{agent } k \text{ is non-faulty} \\ g_k^i, & \text{agent } k \text{ is faulty} \end{cases}$$

where $g_k^i$ denotes the arbitrary d-dimensional random variable reported by a faulty attack agent. Finally, the random variable for each iteration is given in eqn. (3),

$$s^i = \left\{ s_k^i,\; k = 1, 2, \ldots, n \right\}$$

If the proposed model identifies an attack agent in the data, that node is avoided when transmitting the information. Therefore, the proposed DS with EDDS model is effective and resilient against Byzantine failures, malicious data modification, and even server collusion.
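To make the agent model of eqns. (1)-(3) concrete, the following minimal Java sketch is our own illustration; the class names, the three-agent setup, and the way faulty agents are marked are assumptions, not part of the published scheme. Each agent reports either its genuine data vector or an arbitrary faulty one, and reports from flagged agents are excluded from transmission.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch of eqns. (1)-(3): at iteration i each attack agent k
// contributes either its genuine data vector z_k^i or an arbitrary faulty
// vector g_k^i; agents identified as faulty are skipped when forwarding data.
public class AttackAgentModel {

    static class Agent {
        final int id;
        final boolean faulty;
        final Random rng = new Random();
        Agent(int id, boolean faulty) { this.id = id; this.faulty = faulty; }

        // s_k^i: genuine data if non-faulty, arbitrary values if faulty (eqn. 2).
        double[] report(double[] genuine) {
            if (!faulty) return genuine;                 // z_k^i
            double[] g = new double[genuine.length];     // g_k^i
            for (int d = 0; d < g.length; d++) g[d] = rng.nextDouble() * 100;
            return g;
        }
    }

    public static void main(String[] args) {
        double[] genuine = {1.0, 2.0, 3.0};              // z_k^i (eqn. 1)
        List<Agent> agents = new ArrayList<>();
        agents.add(new Agent(1, false));
        agents.add(new Agent(2, true));
        agents.add(new Agent(3, false));

        // s^i collects every agent's report at this iteration (eqn. 3);
        // reports from agents flagged as faulty are excluded from transmission.
        List<double[]> accepted = new ArrayList<>();
        for (Agent a : agents) {
            double[] s = a.report(genuine);
            if (a.faulty) {
                System.out.println("agent " + a.id + " excluded as faulty");
            } else {
                accepted.add(s);
            }
        }
        System.out.println("forwarded reports: " + accepted.size());
    }
}
```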

Results and discussion

The proposed DS with EDDS method is implemented in NetBeans, an open-source integrated development environment (IDE) that supports the Java language. The simulations are run on a PC with an Intel Core 2.50 GHz CPU and 8 GB of RAM. Implementation can be regarded as the most critical stage in establishing confidence in a new system: the execution phase comprises careful planning, investigation of the existing system and its limitations, design of the approaches used to accomplish the change, and evaluation of those approaches. The outcomes obtained cover the cloud server login, client-side login, admin login, and successful login.

The cloud computing infrastructure comprises various types of configurable distributed systems with different connectivity and usage patterns. Owing to its cost-effectiveness, scalability, reliability, and flexibility, organizations are adopting cloud networks at a rapid pace. Despite these merits, cloud networks are susceptible to numerous categories of network attacks and privacy issues. Initially, the authenticated person attempts to log in to the server with a personal user ID and password already registered on the server. The cloud server login process is detailed in Figure 5.

Figure 5

Cloud server login.

Here, the user logs in and then accesses the resources, which record the user's IP address. An unwanted user cannot access the files because it is blocked by the administrator, whereas the genuine user can locate the files and access the data without any interruption. The client-side login process is shown in Figure 6.

Figure 6

Client-side login.

The admin login process is simple: users enter their credentials in the website's login form. This information is forwarded to the authentication server, where it is compared against the user credentials on file. When a match is found, the system authenticates the user and grants access to the account. The admin login is shown in Figure 7, and a successful login is shown in Figure 8.

Figure 7

Admin login.

Figure 8

Successful login.

Cloud storage is defined as a cloud computing model that stores data on the Internet through a cloud computing provider that manages and operates data storage as a service. It is delivered on demand with just-in-time capacity and cost, and it eliminates the need to buy and manage one's own data storage infrastructure.

In cloud computing, data security depends on the distributed system. After reaching the cloud, data can be stored on one or more randomly chosen servers. Owing to this storage model, every server can be abstracted as a storage node in the distributed system. The data storage details are shown in Figure 9a-c.

Figure 9

(a–c) Data storage details.

The intention of this method is to simultaneously achieve several targeted properties that guarantee the data security required by specific data users, such as financial practitioners and auditing professionals. Protection against internal threats: the method aims to provide high-security data storage by separating data across different cloud servers, so that insiders cannot misuse the data or retrieve information from the data saved on any single server; during transmission, the data must be encrypted. Higher-efficiency data processing: the system avoids high communication and computation overhead in order to reduce latency.
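The Java sketch below illustrates this storage policy; the chunking strategy, the single AES key, and the server names are assumptions of ours and are not taken from the implemented system.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Illustrative sketch: the file is split into chunks, each chunk is encrypted
// before transmission, and the chunks are spread over different cloud servers
// so that no single server holds usable data.
public class DistributedStore {

    public static void main(String[] args) throws Exception {
        byte[] file = "sensitive financial records for auditing".getBytes();
        String[] servers = {"server-1", "server-2", "server-3"};

        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        int chunkSize = (file.length + servers.length - 1) / servers.length;
        for (int i = 0; i < servers.length; i++) {
            int from = i * chunkSize;
            int to = Math.min(file.length, from + chunkSize);
            byte[] chunk = Arrays.copyOfRange(file, from, to);
            byte[] encrypted = cipher.doFinal(chunk); // encrypted in transit and at rest
            System.out.println(servers[i] + " <- " + encrypted.length + " encrypted bytes");
        }
    }
}
```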

Table 1 shows the validation and verification results for the resilience of the proposed model to the considered failures and attacks. The proposed approach achieves good performance in terms of attack level rate, security level rate, attack detection rate, and classification accuracy.

Table 1

Validation and verification results of the proposed model's resilience to failures and attacks.

Performance metrics | Byzantine failure | Malicious data modification attack | Even server colluding attacks
Attack level rate | 5.6 | 7.5 | 6.3
Security level rate | 91.5 | 89.6 | 90.8
Classification accuracy (%) | 95.7 | 94 | 97.6
Attack detection rate | 0.91 | 0.896 | 0.935

Performance analysis

Performance metrics such as attack level rate, security level rate, and classification accuracy are calculated for the proposed approach under the Byzantine failure, malicious data modification, and even server colluding attacks. The efficiency of the proposed model is compared with existing models, namely ECC with Blowfish (ECC-BF) (Chinnasamy et al., 2021), Autoencoder with Softmax Regression Algorithm (AE-SRA) (Abdullayeva, 2021), Optimization-based Deep Network (ODN) (Velliangiri et al., 2021), SDE with SPIaaS (Shaikh and Meshram, 2021), and the Decentralized Self-auditing Scheme with Errors Localization (DS-EL) (Su et al., 2021).

Classification accuracy

The classification accuracy of the proposed method is calculated using eqn. (4),

$$Acc = \frac{T_p + T_n}{T_p + T_n + F_p + F_n}$$

where $T_p$ is the number of attacks classified as attacks, $T_n$ the number of normal samples classified as normal, $F_p$ the number of normal samples classified as attacks, and $F_n$ the number of attacks classified as normal.

Attack level rate

The attack level rate is the ratio of the malicious nodes or devices present in the network to the remaining (non-malicious) nodes, expressed as a percentage, and is calculated using eqn. (5),

$$AL = \frac{M_n}{T_n - M_n} \times 100$$

where $M_n$ is the total number of malicious nodes present in the network and $T_n$ is the total number of nodes in the network.

Security level rate

The security level rate is defined as the attack detection rate of the proposed method, that is, the proportion of attacks in the network that are correctly detected, and is calculated using eqn. (6),

$$SI = \frac{T_p}{T_p + F_n}$$
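As a worked illustration of eqns. (4)-(6), the short Java sketch below computes the three metrics from example confusion-matrix and node counts; the numbers are placeholders of our own and are not taken from the reported experiments.

```java
// Illustrative computation of the metrics defined in eqns. (4)-(6),
// using example counts that are not from the paper.
public class Metrics {

    public static void main(String[] args) {
        double tp = 91, tn = 880, fp = 9, fn = 20;   // example confusion-matrix counts
        double maliciousNodes = 5, totalNodes = 100; // example node counts

        // Eqn. (4): classification accuracy.
        double accuracy = (tp + tn) / (tp + tn + fp + fn);

        // Eqn. (5): attack level rate.
        double attackLevel = maliciousNodes / (totalNodes - maliciousNodes) * 100;

        // Eqn. (6): security level rate (fraction of attacks correctly detected).
        double securityLevel = tp / (tp + fn);

        System.out.printf("accuracy = %.3f%n", accuracy);
        System.out.printf("attack level rate = %.2f%n", attackLevel);
        System.out.printf("security level rate = %.3f%n", securityLevel);
    }
}
```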

Comparative analysis of performance metrics

The performance metrics of the proposed model, namely classification accuracy, attack level rate, and security level rate, are compared with those of the existing ECC-BF, AE-SRA, ODN, SDE-SPIaaS, and DS-EL models. The performance comparison between the proposed model and the existing methods is detailed in Table 2.

Table 2

Performance analysis.

Performance metrics | Attack types | ECC-BF (Chinnasamy et al., 2021) | AE-SRA (Abdullayeva, 2021) | ODN (Velliangiri et al., 2021) | SDE with SPIaaS (Shaikh and Meshram, 2021) | DS-EL (Su et al., 2021) | DS with EDDS (Proposed)
Attack level rate | Byzantine failure | 25 | 16.5 | 23.6 | 19 | 23 | 5.6
Attack level rate | Malicious data modification attack | 23.6 | 18.95 | 26.7 | 21.5 | 25 | 7.5
Attack level rate | Even server colluding attacks | 19.6 | 15.8 | 25.7 | 18.7 | 21.8 | 6.3
Security level rate | Byzantine failure | 73.5 | 68.4 | 55.6 | 58.9 | 76.8 | 91.5
Security level rate | Malicious data modification attack | 78.6 | 75.4 | 59.3 | 55.7 | 68.04 | 89.6
Security level rate | Even server colluding attacks | 71.65 | 71.54 | 63.6 | 53.9 | 73.8 | 90.8
Classification accuracy (%) | Byzantine failure | 73 | 85.6 | 60.8 | 78 | 83 | 95.7
Classification accuracy (%) | Malicious data modification attack | 75.7 | 81.5 | 70.2 | 76.5 | 82 | 94
Classification accuracy (%) | Even server colluding attacks | 65.4 | 78.7 | 73.5 | 68.5 | 79 | 97.6

Here, the attack level rate of the proposed DS with EDDS method is 65%, 53.5%, 62%, 51.4%, and 63.5% lower than that of the existing ECC-BF, AE-SRA, ODN, SDE-SPIaaS, and DS-EL methods, respectively, for the Byzantine failure. It is likewise 53.5%, 48%, 68.4%, 45.6%, and 63.6% lower than that of the existing methods for the malicious data modification attack, and 61%, 52%, 65%, 55%, and 63.6% lower for the even server colluding attacks.

Subsequently, the security level rate of the proposed DS with EDDS method is 26%, 33%, 48%, 45%, and 32% higher than that of the existing ECC-BF, AE-SRA, ODN, SDE-SPIaaS, and DS-EL methods, respectively, for the Byzantine failure. It is 29%, 18%, 47.5%, 54%, and 32.8% higher than that of the existing methods for the malicious data modification attack, and 28%, 33%, 17.5%, 43%, and 23.8% higher for the even server colluding attacks.

Additionally, the classification accuracy of the proposed DS with EDDS method is 44%, 10%, 53%, 32%, and 12.4% higher than that of the existing ECC-BF, AE-SRA, ODN, SDE-SPIaaS, and DS-EL methods, respectively, for the Byzantine failure. It is 32%, 15%, 33.4%, 28%, and 13.7% higher than that of the existing methods for the malicious data modification attack, and 45.7%, 32%, 38.6%, 42.4%, and 30% higher for the even server colluding attacks.

Conclusion

In this manuscript, the problem of data security in cloud data storage, which is essentially a distributed storage system, is studied. To ensure the correctness of user data in cloud data storage, an efficient and flexible distributed scheme with explicit dynamic data support is introduced, covering block update, delete, and append operations. Erasure-correcting code is used in the file distribution preparation to provide redundancy parity vectors and guarantee data reliability. By combining the homomorphic token with distributed verification of the erasure-coded data, the scheme achieves the integration of storage correctness assurance and data error localization: whenever data corruption is detected during the storage correctness verification, the misbehaving server can be identified. Comprehensive security and performance analysis shows that the scheme is highly efficient and resilient to Byzantine failures, malicious data modification attacks, and even server colluding attacks. The security of data storage in cloud computing is an extremely important and challenging area that is still in its early stages, and many research problems remain open. Several possible directions for future investigation in this area can be identified. Public verifiability allows a TPA to audit cloud data storage on behalf of users who lack the time, feasibility, or resources to do so themselves; an interesting question is whether a scheme can be designed that achieves both public verifiability and dynamic data support while fulfilling the desired guarantees. In addition, a more detailed investigation of data error localization for dynamic cloud data operations is warranted.
