Optimal representation of High Order Random Boolean kSatisfiability via Election Algorithm as Heuristic Search Approach in Hopfield Neural Networks

This study proposes a hybridization of higher-order Random Boolean kSatisfiability (RANkSAT) with the Hopfield neural network (HNN), a neuro-dynamical model designed to represent knowledge efficiently. The learning process of the HNN has undergone significant changes and improvements to suit various types of optimization problems. However, the HNN model has some limitations, including restricted storage capacity and a tendency to become trapped in local minimum solutions. The Election Algorithm (EA) is proposed to improve the learning phase of HNN for optimal higher-order RANkSAT representation. The main source of inspiration for the EA is the ability of political parties to extend their power and influence beyond their borders when seeking endorsement. The main purpose is to utilize the optimization capacity of EA to accelerate the learning phase of HNN for optimal random kSatisfiability representation. The global minima ratio (mR) and statistical error accumulation (SEA) during the training process were used to evaluate the proposed model's performance. The results of this study revealed that our proposed EA-HNN-RANkSAT outperformed the ABC-HNN-RANkSAT and ES-HNN-RANkSAT models in terms of mR and SEA. This study will further be extended to accommodate the novel field of reverse analysis (RA), which involves data mining techniques to analyse real-life problems. DOI: 10.46481/jnsps.2021.217


Introduction
The central motivation of much of the research carried out in the fields of machine learning, decision science and artificial neural networks (ANNs) is the large structure of their training stages. The complex training stages of many ANNs provide a powerful mechanism for solving optimization problems such as decision-making or classification tasks. Neural networks, which originated in mathematical neurobiology, are used extensively in many fields of study. These networks attempt to simulate human brain capabilities and have been utilized over the last decade as a theoretically sound alternative to traditional statistical models. Classification based on neural network models becomes efficient when applied in a hybrid framework with many forms of predictive models [16]. ANNs were conceived as a variety of capable networks that act as useful tools for specific tasks such as recognition problems [17], data mining [18], forecasting problems [21] and prediction problems [14]. There have been many recent developments in attempts to refine existing ANN models by integrating them with proficient searching techniques to improve the quality of the standalone ANN framework [1].
The Hopfield neural network (HNN) is a type of recurrent artificial network that mimics the operating capacity of human memory. The ability of HNN to manage nonlinear complex patterns through its training and testing capability is particularly useful for interpreting complex real-life problems in the computer science community [8]. In this work, a novel and powerful heuristic search technique known as the Election Algorithm (EA) is explored for effective training of the Hopfield neural network model. Election Algorithms have been utilized in solving various mathematics and engineering benchmark problems, yielding successful performance in both convergence rate and identification of global optima. The applications of Election Algorithms are endless, ranging from industrial planning and scheduling to decision making and machine learning. Election Algorithms have been utilized successfully in [11], and for predicting groundwater level in [12]. Recently, an Election Algorithm was proposed in [20] to accelerate the learning phase of the Hopfield neural network for random 2-satisfiability. The performance of Election Algorithms in solving the travelling salesman problem has been investigated in [5]. Although the Election Algorithm has been applied in various optimization areas, it has recorded tremendous achievement in searching for optimal solutions to various combinatorial optimization problems. In our work, the Election Algorithm is applied in the training phase of the Hopfield network model to accelerate the search for optimal (global minimum) representations of higher-order logic (RANkSAT for k ≤ 3).
However, we did not come across any work that combines the global and local searching capacities of Election Algorithms to enhance the training phase of the Hopfield neural network in searching for an optimal representation of higher-order logic programming (RANkSAT). Therefore, a new hybrid computational technique is proposed by integrating the Election Algorithm into the learning process of the Hopfield neural network for optimal RANkSAT logical representation, to achieve efficiency, robustness and better accuracy. The contributions of this work include: (1) upgrading RAN2SAT to RAN3SAT; (2) implementing the new logical rule in the Hopfield neural network (RANkSAT-HNN); (3) incorporating the Election Algorithm (EA) into the training phase of the Hopfield neural network (HNN) for optimal random kSatisfiability; and (4) comparing the Election Algorithm with other state-of-the-art searching methods in the Hopfield neural network for random kSatisfiability representation. The present work investigates the effectiveness of the Election Algorithm in the Hopfield neural network for RANkSAT by comparing its performance to state-of-the-art models. By developing an effective intelligent working model based on artificial neural networks, the proposed hybrid computational model will be useful to various computational science and optimization communities by providing an alternative approach to computation. The structure of this work is as follows. In Section 2, we present the proposed methods, involving the mapping of RANkSAT as a new logical rule in the Hopfield neural network and a heuristic search method in the Hopfield neural network learning phase via the Election Algorithm. Section 3 reports the models' performance evaluation metrics. The experimental results and their discussion are reported in Section 4. The last section concludes the paper.

Random kSatisfiability (k ≤ 3)
Propositional satisfiability logic can be perceived as a logical rule that consists of clauses containing literals or variables. RANkSAT belongs to the family of non-systematic Boolean logical clauses in which each logical clause involves a random number of literals [13]. The general formulation of F_RANkSAT is presented in Equation (1) as follows.
ψ_i and ¬ψ_i denote literals and their negations in the logical clauses, respectively. In this work, we refer to F_RANkSAT as a Conjunctive Normal Form (CNF) formula in which the logical clauses of RANkSAT logic are chosen uniformly and independently, without replacement, among all possible clauses of length x.
We refer to the optimal mapping y(F_x) → {1, −1} as a logical interpretation, expressed as 1 for TRUE and −1 otherwise. Logically, there are many possible formulations of RANkSAT logical clauses; one formulation taking k ≤ 3 into account is presented as follows.
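As an illustration of the logic described above, the following minimal Python sketch generates a random RANkSAT formula with t first-order, j second-order and d third-order clauses and checks a bipolar interpretation against it. The function names and the clause encoding (tuples of signed variable indices) are our own illustrative assumptions, not the paper's implementation.

```python
import random

def random_ranksat(num_vars, counts):
    """Generate a random RANkSAT formula with counts[k] clauses of order k
    (k = 1, 2, 3). A clause is a tuple of signed variable indices; a negative
    index denotes a negated literal."""
    formula = []
    for k, n in counts.items():
        for _ in range(n):
            chosen = random.sample(range(1, num_vars + 1), k)
            formula.append(tuple(v if random.random() < 0.5 else -v for v in chosen))
    return formula

def satisfied(formula, state):
    """A clause is satisfied when at least one literal evaluates to 1 (TRUE);
    `state` maps each variable index to 1 or -1."""
    def lit(l):
        return state[abs(l)] if l > 0 else -state[abs(l)]
    return all(any(lit(l) == 1 for l in clause) for clause in formula)

# t = 1 first-order, j = 2 second-order, d = 2 third-order clauses
formula = random_ranksat(6, {1: 1, 2: 2, 3: 2})
```

An interpretation satisfying every clause corresponds to a consistent F_RANkSAT assignment; the training phase searches for such an interpretation.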

Random kSatisfiability in Hopfield Neural Network
The Hopfield neural network (HNN) belongs to the family of recurrent neural networks that mimic the human biological brain [15]. Its structure consists of interconnected neurons and a powerful content-addressable memory feature that is crucial in solving various optimization and combinatorial tasks [13]. The network is composed of organised neurons, each portrayed by a variable called the Ising variable. In bipolar recognition, the neurons in a discrete HNN take states S_i ∈ {1, −1}. The fundamental overview of neuron state activation in the Hopfield neural network is shown in Equation (4) below, where W_ij is the synaptic weight from neuron j to neuron i. We define S_i as the state of neuron i in the HNN and ς as a predefined threshold value. The value ς = 0.001 has been specified in [6], [3], [20] to certify that the network's energy decreases to zero. The synaptic weights in the discrete HNN contain no self-connections; that is, the synaptic connection from a neuron to itself is zero. The model also holds symmetrical features in terms of architecture. The HNN model has similar intricate details to the Ising model of magnetism, as described in [15].
where h_i is the local field that connects all the neurons in the HNN; the field is the sum of contributions induced by each neuron state. The task of the local field is to evaluate the final state of the neurons and generate all the possible 3-SAT induced logic obtained from the final states of the neurons. One of the most prominent features of the HNN is the fact that it always converges. The generalized fitness function of F_RANkSAT regulates the synaptic combinations in the HNN and is presented as follows.
where V and NN are the number of variables and the number of neurons generated in F_RANkSAT, respectively. We define the inconsistency of the F_RANkSAT representation as follows.
The value of E(F_RANkSAT) is proportional to the number of "inconsistencies" of the logical clauses. The rule for updating the neural states is given next, and Equation (9) represents the Lyapunov energy function in HNN for 3rd-order logic.
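The neuron dynamics described above can be sketched in a few lines of Python. For brevity the sketch keeps only second-order synaptic weights (the paper's RANkSAT model additionally carries third-order terms W_ijk), and the helper names are illustrative assumptions rather than the paper's code.

```python
def local_field(W, S):
    """h_i = sum_j W[i][j] * S[j]: the field induced on neuron i by all others
    (second-order connections only, for illustration)."""
    return [sum(w_ij * s_j for w_ij, s_j in zip(row, S)) for row in W]

def update_states(W, S, threshold=0.001):
    """Bipolar update rule: S_i -> 1 if h_i >= threshold, else -1.
    The threshold value 0.001 follows the choice of the predefined value in the text."""
    return [1 if h >= threshold else -1 for h in local_field(W, S)]

def energy(W, S):
    """Lyapunov energy restricted to second-order terms:
    E = -1/2 * sum_ij W[i][j] * S_i * S_j."""
    n = len(S)
    return -0.5 * sum(W[i][j] * S[i] * S[j] for i in range(n) for j in range(n))

# Symmetric weights with a zero diagonal, as the discrete HNN requires.
W = [[0.0, 1.0], [1.0, 0.0]]
S_new = update_states(W, [1, -1])
```

Repeated application of `update_states` drives the network toward a state of lower `energy`; a final state whose energy reaches the expected global minimum corresponds to a consistent RANkSAT assignment.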
Equation (10) is applied to classify whether a solution attains global or local minimum energy. The HNN generates the optimal assignment when the induced neuron states achieve global minimum energy. There are limited works combining HNN and EA as a single computational network; here, the robustness of the Election Algorithm is leveraged to improve the training process in the Hopfield model. Consequently, the quality of the final neuronal state can be assessed according to Equation (11), as utilized in [6], [4], [3], [20].

Election Algorithm as Heuristic Search in Hopfield Learning Phase
The Election Algorithm (EA) is an iterative population-based algorithm that provides a variety of solutions in the search space; the local search function is partitioned into sub-search-spaces. The optimization procedure is inspired by the voting process in human society [11], [10]. Generally, the population of individuals is divided into political parties which then carry out a series of operations such as initialization, eligibility measurement, advertisement and alliance. These strategies result in a stronger searching technique. The primary goal of EA is to encourage candidates to converge on a global minimum solution (best solution) to the optimization problem [10]. An optimization process based on the standard Hopfield model has a high probability of becoming caught at a suboptimal solution as the number of neurons fired into the network grows [3], [9], [7].
Various metaheuristic algorithms, such as the Election Algorithm, have been purposefully used in HNN to improve its searching capabilities and to solve the issue of premature convergence before achieving the global optimum, thereby maximizing the number of satisfied clauses during the network's training process. Because of its ability to merge local search into a partitioned search area, EA uses mechanisms such as a constructive marketing approach (positive advertisement), a negative campaign, and an alliance to explore the entire search space. The main steps of the EA-HNN-RANkSAT model, considering k ≤ 3, are presented in Stages 1 to 5 as follows.

Stage 1: Initialization
The main component of any search and optimization procedure is to find the right solution in terms of the problem's variable parameters. An array of vector parameters to be optimized is created, and a population of size N_POP is generated as a starting point. Each solution is distributed within the vector boundary range depending on the variable, as follows, where v ∈ N and τ_v describes the location of the v-th supporter in the N_var-dimensional search space, and N_POP denotes the number of search representatives. A random number with a uniform distribution is defined as τ_1 ∈ [0, 1]. This problem searches for the optimal RANkSAT clauses.

Stage 2:Eligibility measurement
Each individual's eligibility (fitness) function is measured based on the F_RANkSAT clauses as follows.
where f(τ_v) denotes the eligibility of each individual τ_v in the search space, C_i^(k) is the i-th clause in F_RANkSAT, and t, j, d ∈ [1, N] are the numbers of logical clauses of each order in the F_RANkSAT logic.
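Assuming that eligibility is the count of satisfied clauses in F_RANkSAT (consistent with the stated goal of maximizing the number of satisfied clauses during training), the measurement can be sketched as follows; the clause encoding (tuples of signed variable indices) is an illustrative assumption.

```python
def eligibility(formula, state):
    """Eligibility (fitness) of an interpretation: the number of satisfied
    RANkSAT clauses. `formula` is a list of clauses as tuples of signed
    variable indices; `state` maps each variable index to 1 (TRUE) or -1."""
    def lit(l):
        return state[abs(l)] if l > 0 else -state[abs(l)]
    return sum(1 for clause in formula if any(lit(l) == 1 for l in clause))

# one 1st-, one 2nd- and one 3rd-order clause (t = j = d = 1)
f = [(1,), (-2, 3), (1, 2, -3)]
full = eligibility(f, {1: 1, 2: -1, 3: -1})
```

A solution whose eligibility equals the total number of clauses corresponds to E(F_RANkSAT) = 0, i.e. a consistent interpretation.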
Stage 3: Creating an initial population
A population of individuals P_C of N_pop search agents is employed in EA. Each solution carries the eligibility (fitness) of a candidate, e_δC, and the political parties (P_C) represent the search space. Splitting the population of N_pop individuals into political parties (P_C) is part of the EA policy. Each P_C includes a contender (candidate, δ_c) and its followers (supporters, τ_v), which serve as search agents in the solution space. The δ_c together with their τ_v form the P_C. The τ_v are divided among the δ_c based on e_δc, such that the initial number of τ_v of a δ_c is proportionate to e_δc. According to [10], the τ_v of a δ_c are identified by normalizing e_δc, computed as follows.
where I = {δ_cj | j ∈ δ_c}, and e_δCi and α_i define the eligibility and the normalized eligibility of candidate δ_Ci, respectively, while δ_C designates the initial number of candidates in the solution space. The number of individuals serving as initial candidates δ_Ci is modelled as follows.
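Assuming the normalization in [10] divides each candidate's eligibility by the total eligibility and assigns supporters proportionally to the normalized value, a minimal sketch of this allocation is:

```python
def allocate_supporters(cand_elig, n_supporters):
    """Normalize candidate eligibilities (alpha_i = e_i / sum(e)) and allocate
    supporters proportionally. This is an assumed reading of the normalization
    step described in the text, not the paper's exact formula."""
    total = sum(cand_elig)
    alphas = [e / total for e in cand_elig]
    counts = [round(a * n_supporters) for a in alphas]
    return alphas, counts

# two candidates with eligibilities 3.0 and 1.0 sharing 8 supporters
alphas, counts = allocate_supporters([3.0, 1.0], n_supporters=8)
```

The stronger candidate receives proportionally more supporters, which concentrates the search effort around fitter regions of the solution space.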
The initial number of supporters δ_τvi is modelled as follows.
Then, we randomly select δ_τvi of the τ_v and add them to δ_Ci, which forms a P_C in the search space.

Stage 4: Campaign strategy
This stage is modelled in Steps 1 to 3 as follows.

Step 1: Positive advertisement (ϑ)
In modelling the EA's positive advertisement ϑ, we select a supporter position τ_v that takes the form of a variable of δ_C in the search space. The intention of sampling random numbers is to choose the location of a voter τ_v1 to be replaced by a new voter τ_v2; the method is governed by the selection rate λ_τ ∈ [0, 1]. The number of variables transferred to the τ_v by the δ_C is defined as follows.
The randomly chosen variables in the problem space that must be replaced are signified by ψ_τ, the selection rate is defined by λ_τ, and S_C signifies the total number of δ_C in the problem space. To transfer a τ_v to another δ_C, the eligibility distance coefficient (e_d) is modelled as follows, where e_δCi describes the eligibility of δ_Ci and e_δτvi represents the eligibility of voter δ_τvi in the search space.
Step 2: Negative advertisement (ω)
The EA uses the negative campaign operation (ω) as a search mechanism in the opposition movement, to attract members of other parties. This leads to the revival of marginalized parties and the deterioration of others' progress, in the following way.
where R_j ∈ [0, 1] and ϕ is the negative advertisement constant in EA.
Step 3: Coalition strategy (C_L)
In the Election Algorithm, two or more parties sharing the same ideas and aims may sometimes come together to form a new party in the search for solutions. As a result, one applicant exits the advertisement stage as a new nominee dubbed the "leader", while the candidates who withdrew from the election arena are dubbed "followers". To govern the search for the optimal solution in the search space, EA employs a coalition operation. By building trial vectors using elements of existing party candidates in the solution space, the coalition strategy can effectively gather information about effective party mergers and improve the search space.
where τ_vi is defined as a coalition parameter of the political parties, λ_τ ∈ [0, 1] serves as a scaling factor, and i is the index of the current solution during the coalition process (C_L). In EA, a population of solution vectors is randomly created at the start, and each fresh solution obtained competes with the united party in the search space.

Stage 5:Stopping condition (Election Day)
At the start of EA, a population of candidate solutions is generated at random; on Election Day, the stopping condition, each remaining solution competes in the search space with the unified group [11].
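The five stages above can be combined into a compact, self-contained sketch. The operator details below (selection rate, single-bit perturbation, and merging every party behind the overall best member) are simplified illustrative assumptions, not the paper's exact EA-HNN-RANkSAT operators.

```python
import random

def election_search(fitness, num_vars, n_pop=40, n_parties=4, rounds=50,
                    sel_rate=0.5, seed=1):
    """Minimal Election Algorithm sketch over bipolar solution vectors.
    Stage 1: random population; Stage 2: eligibility via `fitness`;
    Stage 3: split into parties; Stage 4: campaign (positive and negative
    advertisement); Stage 5: Election Day, where all parties unite behind
    the overall best solution."""
    rng = random.Random(seed)
    pop = [[rng.choice((1, -1)) for _ in range(num_vars)] for _ in range(n_pop)]
    parties = [pop[i::n_parties] for i in range(n_parties)]
    for _ in range(rounds):
        for party in parties:
            party.sort(key=fitness, reverse=True)   # fittest member leads
            leader = party[0]
            for member in party[1:]:
                # positive advertisement: supporters copy a fraction
                # (sel_rate) of the leader's variables
                for i in range(num_vars):
                    if rng.random() < sel_rate:
                        member[i] = leader[i]
            # negative advertisement: perturb one random supporter
            victim = rng.choice(party[1:]) if len(party) > 1 else party[0]
            flip = rng.randrange(num_vars)
            victim[flip] = -victim[flip]
    # coalition / Election Day: return the overall best solution
    return max((s for party in parties for s in party), key=fitness)

# toy objective: drive every bipolar variable toward 1
best = election_search(lambda s: sum(s), num_vars=8)
```

In the hybrid model, the toy objective would be replaced by the RANkSAT eligibility function, so that Election Day yields a neuron assignment maximizing the number of satisfied clauses.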

Simulation Procedure
The neuro-heuristic searching method of RANkSAT in HNN was implemented as follows. The program's main task is to find the best model that finds the optimal occurrences of random kSAT. Both the logical variables and the clauses were initially randomized. Simulations were executed by manipulating different numbers of neurons, with complexity ranging over 10 ≤ NN ≤ 110. The simulation was conducted on RANkSAT as a logical clause in HNN according to the flowchart in Figure 1.

Performance
The performance of the EA-HNN-RANkSAT model has been quantified based on the global minimum ratio (mR), statistical error measurements based on the sum of squared errors (SSE) and mean absolute error (MAE), as well as model computational time (CT), presented in Equations (20) to (23), respectively.
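Under the assumption that mR is the fraction of retrieved solutions whose energy reaches the expected global minimum within a small tolerance (the bodies of Equations (20) to (23) are not reproduced here), the evaluation metrics can be sketched as:

```python
def global_minimum_ratio(energies, e_min, tol=0.001):
    """mR: fraction of retrieved solutions whose final energy lies within
    `tol` of the expected global minimum energy (assumed definition)."""
    return sum(1 for e in energies if abs(e - e_min) <= tol) / len(energies)

def sse(outputs, targets):
    """Sum of squared errors between HNN outputs and target outputs."""
    return sum((f - h) ** 2 for f, h in zip(outputs, targets))

def mae(outputs, targets):
    """Mean absolute error between HNN outputs and target outputs."""
    return sum(abs(f - h) for f, h in zip(outputs, targets)) / len(outputs)

# two of three runs reach the expected global minimum energy of -4.0
ratio = global_minimum_ratio([-4.0, -4.0005, -3.2], e_min=-4.0)
```

An mR approaching one indicates that nearly all retrieved states achieved global minimum energy, which is the criterion used to compare the models below.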
where f_NN and h_d characterize the HNN output and the target output values respectively, and d denotes the number of permutations in HNN. Figures 2 to 4 display the performance of RANkSAT in HNN, considering the third order k ≤ 3, in terms of mR, error accumulation and time consumed during program execution. In our experiments, we explore the performance of the proposed training approach using different numbers of neurons, 10 ≤ NN ≤ 110. The general trend of model performance indicates a massive increase in error accumulation and CPU time as the complexity of the neurons fired into HNN grows while searching for the optimal RANkSAT representation. The increasing trend in error behaviour reflects the complexity of the neuron states of RANkSAT, which is an NP problem [7], [9]. According to the SSE and MAE in Figures 4 and 5, respectively, measuring the HNN performance during the learning phase in searching for the correct optimal assignment to the RANkSAT logical clauses, the proposed method, EA-HNN-RANkSAT, was able to achieve E(F_RANkSAT) = 0 with lower statistical error accumulation than ABC-HNN-RANkSAT. This may be due to the multiple optimization layers involved in EA, which provide a better screening stage in searching for the optimal assignment, leading to E(F_RANkSAT) → 0 in fewer iterations than ABC-HNN-RANkSAT. This demonstrates the capacity of EA to lower the complexity of the HNN search for RANkSAT logic with respect to error accumulation by reducing the number of iterations in the optimization process.

Results and Discussion
The optimal behaviour of the Hopfield neural network models based on mR has been recorded in Table 1.
The efficiency of the Election Algorithm (EA) is observed in comparison with other metaheuristic search approaches in the Hopfield neural network for RANkSAT representation, leading to E(F_RANkSAT) → 0. According to Table 1, EA-HNN-RANkSAT and ES-HNN-RANkSAT can retrieve more accurate neural assignments that lead to the best global solution during the training process, while in the ABC-HNN-RANkSAT model some neural states became stuck at a suboptimal local solution at NN = 60, NN = 90 and NN = 110, although the model still managed to achieve close to 92% success. Meanwhile, ES-HNN-RANkSAT can only accommodate NN ≤ 60, as the model exceeds the running time threshold when the neuron complexity increases, which is particularly problematic in the case of an inconsistent interpretation ¬F_RANkSAT. The core objective of incorporating metaheuristics in an artificial neural network is to improve the flow of the learning process by reducing sensitivity to neuron complexity, allowing the neurons to progress into the relaxation and recovery stages successfully.
The EA-HNN-RANkSAT model was able to recover a much more appropriate final configuration that leads to a global minimum solution. This confirms the robustness and higher efficiency of the neuro-searching embedded in EA to strengthen the HNN learning process for RANkSAT logical representation. If the mR of the HNN model approaches one after the computing cycle, all solutions generated in the network have achieved global minimum energy [19].
It can be observed from Figures 3 and 4 that the learning errors measured in terms of SSE and MAE increase massively once the number of neurons passes NN = 20. Figure 2 presents the trend of performance based on the SSE measure: the highest accumulation of SSE was demonstrated by ES-HNN-RANkSAT, while EA-HNN-RANkSAT accumulated the lowest error, with close to 98% success. The SSE and MAE values for the HNN-RANkSAT searching behaviour were observed to increase rapidly as the network's neurons become more complex. Compared to ABC-HNN-RANkSAT and ES-HNN-RANkSAT, the proposed EA-HNN-RANkSAT was able to obtain E(F_RANkSAT) → 0 with a lower error accumulation. This is due to the EA scanning process involving multiple optimization surfaces that provide a better filtering point in the state space, enabling the selection procedure to achieve excellent performance leading to E(F_RANkSAT) → 0 in fewer iterations. In comparison to ES-HNN-RANkSAT and ABC-HNN-RANkSAT, EA-HNN-RANkSAT registered lower SSE and MAE, as per the error analysis in Figures 2 and 3. This study looked into the robustness of EA in decreasing the resistance of the HNN to error occurrence by lowering the number of iterations to the bare minimum. In ES-HNN-RANkSAT, the measurements of mR, SSE and MAE end abruptly at NN = 60. This may be attributed to the ineptness of the learning method used in ES-HNN-RANkSAT, which cannot handle ambiguity; as a result of several fluctuations of the neurons, the solutions were trapped at a suboptimal solution (wrong pattern). It is clear that EA-HNN-RANkSAT agreed with ABC-HNN-RANkSAT but outperformed ES-HNN-RANkSAT in searching for the optimal RANkSAT logical representation. Figure 4 displays the HNN-RANkSAT models according to their running time during the implementation cycle. The proposed EA-HNN-RANkSAT was able to execute NN = 90 within 513.23 seconds, faster than ABC-HNN-RANkSAT, which processed NN = 90 in about 763.02 seconds.
The conventional ES-HNN-RANkSAT model was only able to withstand NN ≤ 60, in 1026.53 seconds. Examining the CPU usage patterns in Figure 4, it can be observed that as the RANkSAT logical clauses become more complicated and complex, searching for global solutions requires more effort and subsequently more execution time. The search capacity of ES-HNN-RANkSAT demands more time to search 30 ≤ NN ≤ 60 neurons. EA-HNN-RANkSAT and ABC-HNN-RANkSAT are similar in their running time for 10 ≤ NN ≤ 80 neurons. However, EA-HNN-RANkSAT was slightly faster than ABC-HNN-RANkSAT at the initial and final stages of the searching process. This is because more neurons are required during the training phase to allow the network to migrate through the energy levels and settle on optimal solutions. In other words, as the number of neurons grew, fewer errors accumulated in EA-HNN-RANkSAT, which subsequently reduced the CPU time consumption. In contrast, ES-HNN-RANkSAT required further iterations to find the best solution corresponding to E(F_RANkSAT) = 0, which took more CPU time.
In the context of the RANkSAT logical representation with k ≤ 3, the robustness of integrating EA to promote the training phase of HNN can be seen. During the learning process, EA's stochastic scanning behaviour expands the structure of HNN for an appropriate RANkSAT representation. Consequently, the composition of the EA-HNN-RANkSAT model reflects the diversification of the final neural states: because the chance of obtaining diversified F_RANkSAT solutions is much higher, the solutions are dynamically swapped, and EA-HNN-RANkSAT produces a greater variety of achievable F_RANkSAT logical clauses leading to E(F_RANkSAT) = 0. The essence of HNN-RANkSAT, on the other hand, presents difficulties in the event of conflicting assignments. As the number of neurons grew throughout the trials, the integration of EA in HNN dealt systematically with the higher learning complexity. The success of EA-HNN-RANkSAT in exploring the global solution of F_RANkSAT is linked to the robustness of the local and global search capabilities embedded in EA, which serve as the learning mechanism in HNN. At the early stages of EA, where the number of neurons is limited, the local search potential has a major impact. This paper also investigates how to improve the control parameters in EA to improve the learning process and achieve the optimum E(F_RANkSAT) = 0 logical representation; at the outset, a candidate selection optimization operator is needed to speed up the process of choosing the most qualified candidate to act as a leader (solution).
Multiple optimization layers are employed by EA-HNN-RANkSAT to diversify the approach space and increase the searching capability in a specific area [11]. Positive advertisement is the first optimization layer in EA, which optimizes among candidates within a specific political party in the search space. Negative advertisement is another layer that allows candidates from one party to take supporters from another party. The coalition policy has the potential to have a huge effect in attracting the largest number of people who support the party's manifesto in seeking global solutions; within a fair period, this process provides a collaborative candidate of equal fitness [12], [10]. Because of these features of the EA, the proposed hybrid model can reduce the number of iterations the HNN needs during the learning process by ensuring that a minimal amount of error is accrued in the experimentation. The EA-HNN-RANkSAT model's systematic solution search space makes the local and global search processes easier, helping to achieve global solutions. Thanks to the partitioning mechanism of the search space, the hybrid approach effectively searches for the optimal solution in all of the described sub-spaces. Finally, the EA-HNN-RANkSAT model has a shorter computing time because of its campaign and alliance mechanisms, which systematically improve the unified party's chances of victory within a decent period.

Conclusion
A hybrid approach was proposed in this work, in which the Election Algorithm (EA) was combined with a Hopfield neural network (HNN) to perform RANkSAT representation as a new logical rule. Based on the findings of the experimental simulations performed, the proposed hybrid EA-HNN-RANkSAT model can conclusively be shown to be a robust heuristic methodology that is effective in retrieving preferred assignments, even for clauses of high difficulty. This is related to the EA's stronger optimization layers, which speed up HNN's learning process in looking for an optimal RANkSAT assignment of greater eligibility. It was revealed that the EA-HNN-RANkSAT model was able to complete the searching process slightly faster than ABC-HNN-RANkSAT and ES-HNN-RANkSAT. Notwithstanding, all of the HNN models under consideration produced excellent results when it came to representing RANkSAT logic in HNN and computing the global solution E(F_RANkSAT) = 0 within the confines of a reasonable CPU timeline. In terms of mR, MAE, SSE, and CPU time, EA-HNN-RANkSAT faced less numerical strain during the training phase than the other models. In future work, other metaheuristic approaches, such as the firefly and dragonfly algorithms, will be hybridized with the Hopfield neural network (HNN) to enhance its computational process. We will also explore other NP problems, such as reliability problems, in the Hopfield neural network. The study will also be expanded to include reverse analysis (RA) for actual data sets using data mining tools.