OPTIMIZATION ALGORITHM


for the MFS by utilizing two different search algorithms. Like many other computational methods, the origins of the MFS sprang from the convergent growth of computational thinking that coincides with the evolution of computational power as predicted by Gordon Moore [2]. The relationship between classic generalized Fourier Series theory [3] and the MFS, as well as many other computational approaches, including the more specialized Complex Variable Boundary Element Method (CVBEM) [4], is readily apparent. Many of these computational techniques can be shown to be generalized Fourier Series using specialized basis functions. For example, DeMoes (2018), in "35 Years of Advancements with the Complex Variable Boundary Element Method," examines four different families of complex variable analytic basis functions [5]. In that paper, the computational approach is identical between schemes except that the basis function family differs. Yet all these methods have the same underpinnings, rooted in the generalized Fourier series approach to solving Partial Differential Equations (PDEs). Additionally, the placement of both modeling nodes and collocation points was predetermined to be uniformly distributed, without any attempt to optimize the node and collocation point locations. In [1] and [6], among other papers [7-10], attention is paid to examining how to select locations for positioning computational nodes, among other issues, with no clear conclusion as to the best method for selecting computational node locations. For example, in [9] Carlos Alves uniformly distributes collocation points on the problem boundary and sources outside the boundary without determining which locations are best for use with the MFS. In [10], source locations were found that were "satisfactory" without proving those locations to be the global maximum.
These papers indicate that there is significant variation in computational results depending on two modeling choices. The first is the choice of computational node locations. The second is the choice of collocation point locations. In the current paper, the focus is toward presenting a computational algorithm that addresses the computational node positioning problem by saturating a space surrounding the problem domain with candidate node locations, which are subsequently assessed in multiple-node models based on the MFS, using the standard source function to generate basis functions. Of course, other PDE formulations and choices of basis functions can be examined accordingly, as long as they satisfy the Laplace equation and are analytic. Because collocation point locations are also subject to end-user preferences, the presented positioning algorithm used for selecting node locations is also applied to selecting collocation point locations on the problem boundary. Consequently, a set of ordered pairs of candidate (node, collocation point) locations is developed and then examined as to computational model performance.


Thus, the approximation function includes node location and collocation point location as variables, as well as node and collocation point ordered pairs. The effectiveness of a particular model is measured, in this paper, by consideration of the usual RMS error (or E2 error) in matching problem boundary conditions, and also by examination of the maximum absolute error (or E∞ error). Obviously, other error norms can be examined. In the current paper, the effectiveness of the model is described by the dual measures (E2, E∞).


The algorithm examined in this paper initiates by assessing the effectiveness of using a single-node MFS model. This is the N=1 situation of the algorithm. All candidate node locations are examined, in turn, in developing the respective single-node MFS model. Furthermore, the node positioning is cascaded with all candidate collocation point positions, producing a set of single-node MFS models, each with a different node and collocation point combination. Once the entire space of said combinations is examined, the algorithm chooses the positioning ordered pair that has the minimum error measure outcome. This positioning ordered pair is then considered optimized for the N=1 situation. The algorithm then continues to the N=2 situation by developing all possible two-node and two-collocation-point combinations. As with the N=1 situation described above, all possible MFS models are developed and the corresponding error measures computed. The node and collocation point selected in the N=1 situation described above are retained. As before, the algorithm chooses the second node and the second collocation point locations that produce the minimum error measure. This completes the N=2 situation. The algorithm continues to the N=3 situation, and hence to larger N value situations, following the procedures described above. As the N value of the situation increases, the approximation computational error measure is reduced.
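The cascade above can be sketched in code. This is our own illustrative sketch, not the authors' implementation: it assumes the 2-D Laplace source function ln|x − y| as the basis (the "standard source function" referred to in the text), uses a least-squares collocation solve, and scores each candidate pairing by RMS error:

```python
import numpy as np

def source_basis(x, y):
    # 2-D Laplace source-function basis: ln|x - y|
    return np.log(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))

def fit_and_score(nodes, colloc, bc, eval_pts, eval_vals):
    """Collocate to find basis coefficients, then return the RMS (E2)
    error of the resulting approximation at the evaluation points."""
    A = np.array([[source_basis(c, n) for n in nodes] for c in colloc])
    b = np.array([bc[c] for c in colloc])
    coeffs = np.linalg.lstsq(A, b, rcond=None)[0]
    E = np.array([[source_basis(p, n) for n in nodes] for p in eval_pts])
    approx = E @ coeffs
    return np.sqrt(np.mean((approx - np.asarray(eval_vals)) ** 2))

def greedy_mfs(cand_nodes, cand_colloc, bc, eval_pts, eval_vals, N):
    """N=1, N=2, ... cascade: retain previously chosen (node,
    collocation point) pairs and add the one new pair that yields
    the minimum E2 error."""
    nodes, colloc = [], []
    for _ in range(N):
        candidates = [(n, c) for n in cand_nodes for c in cand_colloc
                      if n not in nodes and c not in colloc]
        best = min(candidates,
                   key=lambda pair: fit_and_score(nodes + [pair[0]],
                                                  colloc + [pair[1]],
                                                  bc, eval_pts, eval_vals))
        nodes.append(best[0])
        colloc.append(best[1])
    return nodes, colloc
```

Candidate points are represented as tuples so they can key the boundary-condition dictionary; each pass over all remaining (node, collocation point) combinations mirrors the exhaustive examination described above.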


However, the use of the computational MFS involves issues such as the stability and accuracy of the underlying matrix solver. In our work, the matrix solver is a limitation that was not further examined. But because the algorithm results in a reduced error measure as N increases, the computational experiments indicate that fewer nodes and fewer collocation points can be used while producing computational error measures that are as low as those obtained when using much larger, but uniformly distributed, sets of nodes and collocation points. This means that with fewer nodes and collocation points involved in the MFS model, the matrix solver is generally more successful in producing a stable outcome.
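The matrix-solver issue can be illustrated numerically. The sketch below (our own example geometry, not from the paper) builds the ln|x − y| collocation matrix for uniformly distributed points on concentric circles and shows how its condition number grows as more points are used, which is why smaller optimized point sets help the solver:

```python
import numpy as np

def mfs_collocation_matrix(num_points, node_radius=2.0):
    """ln|x - y| collocation matrix for num_points uniformly
    distributed collocation points on the unit circle and nodes on a
    concentric circle of radius node_radius (illustrative geometry)."""
    t = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    colloc = np.column_stack([np.cos(t), np.sin(t)])
    nodes = node_radius * colloc
    diff = colloc[:, None, :] - nodes[None, :, :]
    return np.log(np.linalg.norm(diff, axis=2))

# The condition number grows rapidly as more uniformly distributed
# points are used -- the matrix-solver stability issue noted above.
for n in (8, 16, 32):
    print(n, np.linalg.cond(mfs_collocation_matrix(n)))
```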


Optimization Algorithm Description


There are three types of modeling points used to determine the approximation function and its accuracy: candidate nodal points, candidate collocation points, and evaluation points. The candidate nodal points are positioned exterior to the problem boundary and ultimately serve as the locations of the basis function nodes used in the approximation function. The collocation points are located on the problem boundary, have known potential values, and are used as the boundary conditions when determining the coefficient of each basis function in the approximation function. Lastly, the evaluation points are points on the problem boundary, at different locations than the collocation points, that enable the determination of the error in the approximation function. Unlike the collocation and nodal points, evaluation points act independently of the other two types of model points. Nodal points and collocation points are related in that the pairing between one nodal point and one collocation point determines the coefficient of the basis function at that node, while the evaluation points determine the error associated with the approximation function. Root mean squared (RMS) error is used as the evaluation criterion for optimum node location, along with maximum absolute error (Max error).
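In symbols (our notation, assuming the 2-D Laplace source function as the basis), the approximation function built from N nodal points y_j takes the form

    û(x) = Σ_{j=1}^{N} c_j ln ||x − y_j||,

with the coefficients c_j fixed by the collocation conditions û(x_i) = u(x_i) at the collocation points x_i on the boundary, and with the error assessed at M evaluation points z_k by

    E2 = [ (1/M) Σ_{k=1}^{M} (û(z_k) − u(z_k))² ]^{1/2},    E∞ = max_k |û(z_k) − u(z_k)|.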





For each candidate nodal point paired with each collocation point, the RMS error and Max error associated with that approximation function must be recorded. The following algorithm outlines the process by which nodal point and collocation point pairs are determined and optimized.


Jan.Feb.Mar 2019 • TPG 7

