Both the insurer's default risk and the insured's surrender behavior are important factors in pricing unitized participating life insurance. To extend the results of our earlier research, we assume that both surrender and default can occur at the end of each policy year. This paper constructs three models for pricing the surrender option under insolvency risk, and implements an iterative algorithm based on Monte Carlo simulation to compute them. The findings are as follows: the price of unitized participating life insurance can be badly underestimated if the surrender option is ignored, and the option's value is highly sensitive to exogenous variables such as volatility. The results also imply that an increase in default risk tends to increase the value of the surrender option.
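The Monte Carlo valuation idea behind this abstract can be sketched with a toy contract (not the paper's three models): a geometric Brownian motion fund, a maturity guarantee, and a crude year-end surrender rule. All parameters (drift, volatility, guarantee growth, surrender threshold) are assumptions for illustration only.

```python
import math
import random

def price_participating(n_paths=20000, T=5, s0=100.0, r=0.03,
                        sigma=0.2, guarantee=100.0, g=0.02,
                        surrender_mult=None, seed=7):
    """Monte Carlo price of a toy unitized participating contract.

    Pays max(fund, guarantee*(1+g)^T) at maturity T.  If surrender_mult
    is given, the holder surrenders at the first year end where the fund
    exceeds surrender_mult * guarantee (a heuristic, not optimal, rule)
    and receives the fund value at that time.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s = s0
        payoff, t_pay = None, T
        for t in range(1, T + 1):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigma ** 2) + sigma * z)  # one-year GBM step
            if surrender_mult is not None and t < T and s >= surrender_mult * guarantee:
                payoff, t_pay = s, t  # surrender: take the fund value now
                break
        if payoff is None:
            payoff = max(s, guarantee * (1 + g) ** T)  # maturity guarantee
        total += payoff * math.exp(-r * t_pay)
    return total / n_paths

v_hold = price_participating()                    # surrender not allowed
v_surr = price_participating(surrender_mult=1.5)  # heuristic surrender rule
```

Both estimates are bounded below by the discounted guarantee, since every path pays at least that amount; the paper's iterative algorithm would instead search for the optimal (not heuristic) exercise rule.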
Measuring credit risk and economic capital is one of the most important tasks in risk management for commercial banks. The authors use the Johnson transformation to estimate economic capital under non-Gaussian data. By transforming real data to the standard normal distribution, they overcome the restriction of Monte Carlo simulation in the copula method, which typically requires Gaussian or t distributions, making it convenient to simulate correlated default times and banks' economic capital. Comparative analysis shows that the Johnson transformation is feasible and reasonable. The paper thus provides an effective quantitative method for commercial banks' risk management under the Basel Capital Accord (Basel II), and the algorithm has reference value for practice.
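The purpose of the transformation step, mapping non-Gaussian margins to standard normal scores before copula simulation, can be sketched as follows. A full Johnson-system fit estimates four shape parameters; the rank-based inverse-normal transform below is a simpler stand-in with the same goal, shown here on an assumed, heavily skewed sample.

```python
from statistics import NormalDist

def to_standard_normal(data):
    """Map a sample with an arbitrary continuous distribution to
    (approximately) standard normal scores via empirical ranks.
    A Johnson-system fit would estimate four shape parameters; this
    rank-based transform is a simpler stand-in with the same goal."""
    n = len(data)
    order = sorted(range(n), key=lambda i: data[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf((rank - 0.5) / n)  # plotting-position quantile
    return z

# heavily skewed sample (e.g. loss data): powers of 2
sample = [2.0 ** k for k in range(10)]
scores = to_standard_normal(sample)
```

The transform is monotone, so the dependence structure of the data is preserved while the margins become standard normal, which is exactly what Gaussian/t-copula simulation requires.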
Taking new investment opportunities into account, this paper sets up a model of funds occupation by controlling shareholders under excessive investment and analyzes controlling shareholders' share reductions over time. It finds that a rational controlling shareholder chooses the optimal reduction proportion through a second round of occupation. The theoretical model shows that the main factors affecting reduction include the controlling shareholder's initial shareholding proportion, the shareholder's characteristics, and the strength of external legal protection: the initial shareholding proportion and the strength of external investor protection have positive effects, and compared with non-state controlling shareholders, state-owned controlling shareholders are less inclined to reduce their holdings. The empirical evidence confirms the effects of the initial shareholding proportion and the controlling shareholder's characteristics, but the effect of external legal protection on the level of controlling shareholders' reduction remains uncertain.
Based on an analysis of the basic credit events involving counterparty risk, this paper uses survival analysis to price credit default swaps. The study shows: (a) different credit events influence the price of credit default swaps, and CDS prices that incorporate counterparty default events are lower than those that ignore the counterparty; (b) the default correlation between the reference asset and the protection seller is crucial in CDS valuation: whether the correlation coefficient is positive or negative, it affects the fair valuation of the credit default swap.
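The survival-analysis pricing of a single-name CDS (without the counterparty leg the paper adds) can be sketched as follows, with an assumed constant hazard rate and recovery; the parameters are illustrative only.

```python
import math

def cds_fair_spread(lam, recovery, maturity, dt=0.25, r=0.02):
    """Fair CDS spread from a constant hazard rate lam.

    Survival S(t) = exp(-lam * t).  Premium leg = spread * sum of
    dt * S(t_i) * D(t_i); protection leg = (1 - R) * sum of
    (S(t_{i-1}) - S(t_i)) * D(t_i).  The fair spread equates the legs.
    """
    n = int(round(maturity / dt))
    annuity, protection = 0.0, 0.0
    for i in range(1, n + 1):
        t_prev, t = (i - 1) * dt, i * dt
        disc = math.exp(-r * t)
        annuity += dt * math.exp(-lam * t) * disc
        protection += (math.exp(-lam * t_prev) - math.exp(-lam * t)) * disc
    return (1.0 - recovery) * protection / annuity

spread = cds_fair_spread(lam=0.02, recovery=0.4, maturity=5.0)
```

For small hazard rates the result reproduces the "credit triangle" approximation spread ≈ (1 − R)·λ; adding counterparty default, as in the paper, lowers the protection value and hence the spread.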
Based on price duration and volume duration, this paper introduces two indexes to describe market liquidity. Results show that the two indexes can conflict; to determine which is more significant, the liquidity ratio is introduced to characterize the liquidity situation. Using the new indexes, this paper studies the intraday pattern of liquidity and builds a model to identify the factors influencing the liquidity of the futures market. Empirical results show that both trading volume and open interest influence liquidity positively, while the absolute return has a significant negative influence; moreover, the influence of volume is stronger. The results also show that the price spread alone is not suitable for the Chinese market: volume should be taken into account when measuring liquidity in the Chinese market.
This paper studies personalized pricing strategies in a duopoly market. The results show that, for either myopic or strategic consumers, personalized pricing increases the profit of the firm with the larger loyal consumer base. If the sizes of the two loyal consumer bases are not too different, personalized pricing also increases the profit of the firm with the smaller loyal consumer base; however, if the sizes differ greatly, personalized pricing decreases that firm's profit. In addition, in a strategic consumer market the parameter space in which the firm with the smaller loyal consumer base benefits from personalized pricing is smaller than in a myopic consumer market.
Trade credit is a major means for enterprises to improve competitiveness, promote product sales, and enlarge market share in fierce market competition, and it has therefore received tremendous attention from both enterprises and scholars. This paper studies the collaborative procurement problem for multiple retailers who form a purchasing alliance under permissible delay in payments offered by the supplier. Taking into account the ordering cost, the purchasing cost, the holding cost, the interest earned, and the interest charged, the cost allocation problem of collaborative procurement for perishable products is formulated as a cooperative game, which is proved to be subadditive and balanced. Furthermore, an allocation rule is proposed and proved to lie in the core. The results show that the optimal replenishment cycle length decreases as the scale of the purchasing alliance increases; that the alliance's cost is less than the sum of the retailers' independent costs, so retailers are inclined to form a purchasing alliance; and that collaborative procurement reduces each retailer's cost.
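The cooperative-game machinery in this abstract (subadditivity, balancedness, a core allocation) can be illustrated on a toy joint-replenishment game rather than the paper's full trade-credit model: coalition cost c(S) = sqrt(2·K·h·D_S), a joint-EOQ form that is concave in pooled demand and hence subadditive. The retailer names, demands, and cost parameters are assumptions.

```python
import math
from itertools import combinations, permutations

demands = {"r1": 100.0, "r2": 300.0, "r3": 500.0}  # hypothetical retailers
K, h = 50.0, 2.0  # assumed fixed ordering cost and holding cost

def cost(coalition):
    """Joint-EOQ style coalition cost: concave in pooled demand,
    so the cost game is subadditive (pooling never hurts)."""
    d = sum(demands[i] for i in coalition)
    return math.sqrt(2.0 * K * h * d)

players = sorted(demands)

# Shapley value: average marginal cost over all join orders.
shapley = {i: 0.0 for i in players}
for order in permutations(players):
    seen = []
    for i in order:
        shapley[i] += cost(seen + [i]) - cost(seen)
        seen.append(i)
for i in players:
    shapley[i] /= math.factorial(len(players))

# Subadditivity check: adding one more retailer to a coalition
# never costs more than serving it separately.
subadditive = all(
    cost(list(s) + list(t)) <= cost(s) + cost(t) + 1e-9
    for k in (1, 2) for s in combinations(players, k)
    for t in combinations([p for p in players if p not in s], 1)
)
# Core check: no coalition pays more in total than it would alone.
in_core = all(
    sum(shapley[i] for i in s) <= cost(s) + 1e-9
    for k in (1, 2, 3) for s in combinations(players, k)
)
```

Because the cost function here is submodular, the Shapley value is guaranteed to be a core allocation; the paper proves the analogous properties for its trade-credit cost game.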
This paper develops three-stage production and ordering models to study supply chain coordination mechanisms with option contracts under demand information updating. At the first stage, the retailer decides the firm order quantity and the option quantity; at the second stage, which begins with a demand information update, the retailer adjusts both. At the third stage, demand is realized; any shortage is met first by exercising the option contract and then by placing an urgent order. The option contract involves two option prices, an exercise price, and a wholesale price for the urgent order, and the latter two can be expressed as binary implicit functions of the two option prices. Simulating the three-stage production and ordering model with option contracts, we find that both the system profit and the two members' profits achieve a Pareto improvement, and that the option prices and the proportionality factor between them strongly affect the ordering quantities and the allocation of the extra system profit. Moreover, a smaller first-stage option price together with a higher second-stage option price gives the manufacturer an incentive to speculate, so retaining some negotiation power is the rational choice for the retailer.
In supply chain finance, a capital-constrained retailer can obtain bank financing under the manufacturer's guarantee. The manufacturer decides whether to provide the guarantee and sets the corresponding wholesale price. Expected profit models for the manufacturer and the retailer are established within the newsvendor framework. For a given wholesale price there exists an optimal order quantity for the retailer, and the retailer chooses to finance only if the order quantity falls within a certain range. Within this range there also exists an order quantity that is optimal for the manufacturer, whose optimal policy is determined by comparing its expected profits with and without financing.
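The newsvendor building block referenced here can be sketched explicitly. This is the textbook critical-fractile solution under an assumed uniform demand, not the paper's financing model; the prices and demand range are illustrative.

```python
def newsvendor_q(price, cost, salvage, lo, hi):
    """Critical-fractile order quantity for Uniform(lo, hi) demand:
    Q* = F^{-1}(cu / (cu + co)) with underage cu = price - cost and
    overage co = cost - salvage."""
    cu, co = price - cost, cost - salvage
    return lo + (hi - lo) * cu / (cu + co)

def expected_profit(q, price, cost, salvage, lo, hi, steps=20000):
    """Numerical expected profit for Uniform(lo, hi) demand (midpoint rule)."""
    total = 0.0
    for k in range(steps):
        d = lo + (hi - lo) * (k + 0.5) / steps
        sold = min(q, d)
        total += price * sold + salvage * max(q - d, 0.0) - cost * q
    return total / steps

q_star = newsvendor_q(price=10.0, cost=6.0, salvage=2.0, lo=50.0, hi=150.0)
```

With cu = co = 4 the fractile is 0.5, so Q* sits at the demand midpoint; a brute-force profit comparison around Q* confirms the optimum.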
Providing housing allowances for low-income groups is one of the government's important measures for achieving social fairness, but it reduces the efficiency of resource allocation and causes a welfare loss to the economy. How to measure the welfare loss caused by housing allowances is therefore a question of concern. Two models are established to investigate the impact of housing allowances on social welfare when subsidy funds are sufficient and when they are not. It is shown that the size of the social welfare loss depends not only on the amount of the housing allowance but also on the way it is delivered in the affordable housing market. The larger the price decline caused by the allowance, the more distorted the resource allocation and the greater the welfare loss. For a fixed allowance fund, the narrower the range of beneficiaries, the greater the welfare loss. The higher the market price or the larger the trading volume, the greater the welfare loss; likewise, the greater the demand or supply elasticity, the greater the loss. The welfare loss is not unbounded, however: it has a maximum value.
Based on the connotation of the scientific outlook on development, this paper constructs a comprehensive evaluation index system for provinces covering the economy, the environment, society, all-round human development, and technological development, drawing on the indices of authoritative organizations. Weights are determined by an improved Group-G1 method, an evaluation model of the scientific development concept is built on this basis, and the model is applied to 14 provincial-level administrative regions of China. The contributions of this paper lie in three aspects. First, it performs a consistency test on the index orderings of different experts and eliminates orderings that fail the test, avoiding logical confusion among the orderings. Second, it uses a cyclic revision procedure to merge the index-importance orderings of the experts who pass the consistency test into a single ordering that aggregates the experts' opinions, thereby resolving expert inconsistency in group decision-making. Third, it determines the G1 importance ratios of the indexes from the standard-deviation ratios of adjacent indexes, avoiding the subjectivity of G1 weighting and settling the allocation between subjective and objective weights objectively.
Agent-based modeling has become an important method for analyzing complexity in socio-economic systems, but for lack of effective model validation it has not been accepted by mainstream researchers. This paper compares agent-based modeling with equation-based modeling in terms of modeling procedure, and reviews the main methodological problems and related research advances in the validation of agent-based models from three aspects: outcome validation, process validation, and model docking. We argue that the standardization of agent-based modeling tools, consistency judgment, sensitivity analysis, and parameter-space reduction techniques are promising research directions for validating agent-based models.
Identifying the influencing factors of airport competitiveness, which are numerous and interrelated, is a hot academic topic. This study improves the traditional decision-making trial and evaluation laboratory (DEMATEL) method in view of its limitations and proposes a BP-DEMATEL method suited to identifying influencing factors. It uses a BP neural network to calculate the weights between the objective index and the influencing-factor indexes, uses these weights to construct the direct-relation matrix, and then applies the DEMATEL method to analyze the influencing factors. An empirical analysis of airport competitiveness shows that the method is feasible and can provide theoretical support; the BP-DEMATEL method thus enriches the theory and methods for studying influencing factors and makes it possible to extract the fundamental influencing factors effectively.
Combining the catastrophe progression method with the index deviation degree, this paper develops an obstacle diagnosis model based on the catastrophe progression method. The model decomposes the diagnosis target into multiple levels, generates catastrophe fuzzy membership functions by combining catastrophe theory with fuzzy mathematics, computes the deviation degree of each index quantitatively, and obtains the obstacle level of every index, so that the indexes can be ranked and analyzed by obstacle level. Taking the dynamic development of diagnosis objects into account, the model offers a new approach to obstacle diagnosis that requires no index weights, extending the applications of the catastrophe progression method. We apply the model to diagnose the growth obstacles of 327 listed SMEs in 2009 and obtain, for each SME, the obstacle levels of five aspects: growth ability, profitability, capital operation ability, market expectation, and enterprise scale. This extends the study of SME growth and demonstrates the validity of the model.
This paper develops a new framework for analyzing byproduct gas supply in the iron and steel industry. Based on online theory, it studies the online balance assignment problem for byproduct gas: in the production process, the decision maker must balance the gas supply online. Under some reasonable assumptions, the paper constructs a balance assignment model of byproduct gas for the case where the balance range is unknown. It first analyzes the special properties of the offline case and the characteristics of the online case under different scenarios, and then designs an online balance strategy with a competitive ratio of (1+3M/m)/4 that requires no knowledge of the supply interval. The study provides ideas and theory for the balance assignment and use of byproducts in the manufacturing industry.
This paper focuses on the dynamic scheduling of relief materials after a large-scale emergency. We quantify the victims' loss in terms of unmet needs and formulate a mixed integer programming model that minimizes the victims' loss and the vehicle scheduling expense. The main decisions are the delivery routes and the allocation of relief supplies from the relief distribution center (RDC) to the demand points when the vehicle quantity is insufficient. We use hierarchical decomposition to reduce the solution space, improve the coding scheme, and design a genetic algorithm for the problem. Experimental results give the dynamic scheduling schemes under different vehicle quantities, and the optimal vehicle quantity at the RDC is determined through comparative analysis; the validity of the model and algorithm is verified against a realistic distribution scheme.
Intuitionistic normal fuzzy numbers, together with their operational laws and Euclidean distance, are defined, and intuitionistic normal fuzzy aggregation operators, including a weighted arithmetic averaging operator and a weighted geometric averaging operator, are introduced. For multiple criteria decision-making problems in which the criteria values are intuitionistic normal fuzzy numbers and the criteria weight information is incomplete, an approach based on these aggregation operators is proposed. The optimal criteria weights are obtained from an optimization model that minimizes the sum of the distances between every pair of alternatives. After the criteria values are aggregated with the intuitionistic normal fuzzy operators, the comprehensive evaluation value of each alternative is obtained, and the alternatives are ranked by comparing their relative closeness to the ideal and anti-ideal solutions. An example shows the feasibility and effectiveness of the method.
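The intuitionistic part of the weighted arithmetic averaging operator can be sketched on plain intuitionistic fuzzy numbers (membership, non-membership pairs); the normal-fuzzy-number component of the paper's operators is omitted, and the input values and weights are illustrative assumptions.

```python
def ifwa(values, weights):
    """Intuitionistic fuzzy weighted average of (mu, nu) pairs:
    aggregated mu = 1 - prod (1 - mu_i)^w_i,
    aggregated nu = prod nu_i^w_i, with weights summing to 1."""
    prod_mu, prod_nu = 1.0, 1.0
    for (m, n), w in zip(values, weights):
        prod_mu *= (1.0 - m) ** w
        prod_nu *= n ** w
    return 1.0 - prod_mu, prod_nu

agg = ifwa([(0.6, 0.3), (0.4, 0.5)], [0.5, 0.5])
```

The operator is idempotent (aggregating identical pairs returns the pair) and preserves the constraint mu + nu ≤ 1, which is why it yields a valid comprehensive evaluation value per alternative.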
For stochastic multi-criteria decision-making problems in which the information on criteria weights is incomplete and the index values of the alternatives are intuitionistic fuzzy numbers, an intuitionistic random decision-making approach based on the MYCIN certainty factor and prospect theory is proposed. Using the score function of intuitionistic fuzzy numbers and prospect theory, the MYCIN certainty factors of the alternatives under each index are obtained, and the trust degrees of the indexes are determined by grey incidence analysis. A fusion method for the certainty factors across indexes is then derived to identify the optimal alternative; the fusion method is proved to satisfy the commutative and associative laws, and the best alternative is obtained with it. Finally, an example shows the feasibility and validity of the method.
This paper considers a single-machine scheduling problem in which the release dates of the jobs are convex decreasing functions of the resource consumed. The objective is to minimize the total resource consumption subject to a makespan limit. For this strongly NP-hard problem, two basic operators, left-move and right-move, and two neighborhood generation methods, insert and swap, are defined to build a simulated annealing algorithm. To evaluate the accuracy of the proposed algorithm, the problem is relaxed to an assignment problem whose lower bound is obtained by the Hungarian method. Computational results show that the proposed simulated annealing algorithm yields high-quality solutions for the considered scheduling problem.
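The swap/insert neighborhood structure of a simulated annealing search can be sketched on a simpler stand-in objective whose optimum is known in closed form (total weighted completion time, optimized by the WSPT rule), rather than the paper's resource-dependent release-date problem; the instance data, cooling schedule, and starting point are assumptions.

```python
import math
import random

# hypothetical 5-job instance: (processing time, weight)
jobs = [(4, 1), (1, 5), (3, 2), (2, 4), (5, 3)]

def total_weighted_completion(seq):
    t, obj = 0, 0
    for j in seq:
        p, w = jobs[j]
        t += p
        obj += w * t
    return obj

def neighbor(seq, rng):
    """Generate a neighbor by a random swap or insert move."""
    s = list(seq)
    i, k = rng.sample(range(len(s)), 2)
    if rng.random() < 0.5:
        s[i], s[k] = s[k], s[i]   # swap two positions
    else:
        s.insert(k, s.pop(i))     # remove a job and reinsert it elsewhere
    return s

def simulated_annealing(iters=5000, t0=50.0, cooling=0.999, seed=1):
    rng = random.Random(seed)
    cur = list(range(len(jobs)))
    rng.shuffle(cur)
    cur_obj = total_weighted_completion(cur)
    best, best_obj, temp = list(cur), cur_obj, t0
    for _ in range(iters):
        cand = neighbor(cur, rng)
        cand_obj = total_weighted_completion(cand)
        # accept improvements always, uphill moves with Boltzmann probability
        if cand_obj <= cur_obj or rng.random() < math.exp((cur_obj - cand_obj) / temp):
            cur, cur_obj = cand, cand_obj
            if cur_obj < best_obj:
                best, best_obj = list(cur), cur_obj
        temp *= cooling
    return best, best_obj

best_seq, best_obj = simulated_annealing()
# WSPT (sort by p/w ascending) is optimal for this stand-in objective,
# so it plays the role of the paper's Hungarian-method lower bound.
wspt = sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])
opt_obj = total_weighted_completion(wspt)
```

On this tiny instance the annealer recovers the known optimum; in the paper, solution quality is instead gauged against an assignment-relaxation lower bound.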
A main-diagonal dominance principle is put forward for judging equilibria in joint mixed strategies for bimatrix games. We discuss the relationship between mixed strategies and joint mixed strategies, and between Nash equilibrium and equilibrium in joint mixed strategies, in terms of payoffs, and we find equilibria in joint mixed strategies through an improved PSO algorithm.
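The equilibrium condition being searched for can be sketched as a verification routine: a pair of independent mixed strategies is a Nash equilibrium of a bimatrix game iff neither player gains by deviating to any pure strategy. This only checks the condition (on the classic matching-pennies game); the diagonal-dominance principle and the improved PSO search are the paper's contributions and are not reproduced here.

```python
def expected_payoffs(x, y, A, B):
    """Expected payoffs of the row player (matrix A) and the column
    player (matrix B) under independent mixed strategies x and y."""
    ea = sum(x[i] * A[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))
    eb = sum(x[i] * B[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))
    return ea, eb

def is_nash(x, y, A, B, tol=1e-9):
    """(x, y) is a Nash equilibrium iff no pure-strategy deviation helps."""
    ea, eb = expected_payoffs(x, y, A, B)
    row_devs = (sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(x)))
    col_devs = (sum(x[i] * B[i][j] for i in range(len(x))) for j in range(len(y)))
    return all(d <= ea + tol for d in row_devs) and all(d <= eb + tol for d in col_devs)

# matching pennies: zero-sum, the only equilibrium is uniform mixing
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
```

A heuristic search such as PSO would evolve candidate strategy pairs and use a condition like `is_nash` (or a continuous violation measure) as its fitness.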
Circuit simulation is an important technique for reducing cost and improving efficiency in circuit design, but existing circuit simulation software still cannot meet the industry's dual demands for accuracy and speed. This paper uses the response surface method to establish an approximate input-output relationship for MOS elements, which simplifies the complex circuit computation to a linear one, greatly reducing the workload and increasing the speed. Meanwhile, to preserve accuracy, the random points selected in the simulation are improved by using more uniform and representative quasi-random numbers: for the same number of points, quasi-random numbers give better results than pseudo-random numbers. This guarantees accuracy while improving efficiency, and empirical tests give good results.
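The quasi-random numbers mentioned here can be sketched with the Halton sequence, a common low-discrepancy construction (the abstract does not specify which generator the paper uses, so this is an illustrative assumption).

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`.
    A d-dimensional Halton point uses pairwise-coprime bases, one per
    dimension; successive points fill the unit cube far more evenly
    than pseudo-random draws."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# first points of a 2-D Halton sequence (bases 2 and 3)
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```

The base-2 coordinate visits 1/2, 1/4, 3/4, 1/8, ... : each new point lands in the largest remaining gap, which is exactly the uniformity property the abstract credits for the accuracy gain.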
In the geographically and temporally weighted regression model, the coefficient estimates are obtained by a geographically weighted fitting technique in which a spatio-temporal weighted distance is used in the weight matrix. Appropriate statistics for testing the temporal and spatial nonstationarity of the estimated coefficients are then proposed, and their p-values are calculated with the third-order-moment χ^{2} approximation method. Finally, a simulation example and a real example show that the test methods are valid.
To investigate the opportunities and prospects for developing the Chinese oil carrier fleet and to rationalize its size and composition, a series of mathematical models and methods for fleet allocation and planning were studied, the strategic planning of the fleet for importing and exporting crude oil for China was systematically investigated, and a strategic development plan for the Chinese crude oil fleet from 2000 to 2010 was worked out. After the eleven years of the planning horizon, this paper presents the methodology and outcomes produced at that time and compares them with the actual development of the Chinese crude oil fleet over 2000-2010. The results show that the mathematical models and methods employed in the case study are rational and practicable, and that studying fleet allocation and planning with modern systems analysis is of special strategic significance to the fleet investment and development of large shipping companies. Finally, experience in planning a fleet for a country or a large company is summarized, and suggestions for doing better in this regard are given.
Defining the level of shunting safety is critical to railway safety management. In view of the imperfect index systems used in shunting safety evaluation, an evaluation index system for shunting safety was established, drawing on previous research and considering personnel, equipment, and environmental factors. A combined quantitative and qualitative evaluation method based on connection numbers was then put forward by means of set pair analysis: the method calculates a comprehensive connection number, evaluates the safety level quantitatively by its eigenvalue, and analyzes the situation qualitatively by the set pair potential. Finally, the index system and the method were applied to a railway station. The results show that the constructed index system characterizes shunting safety well, that shunting safety evaluation is thereby improved, and that the method can evaluate the safety level both quantitatively and qualitatively, providing a new approach and line of thought for safety evaluation theory.
An analytic hierarchy process (AHP)-based three-level assessment system (target, criterion, and index levels) was established to cover all elements relevant to rural road maintenance management. This paper applies an artificial intelligence (AI)-based fuzzy neural network approach to the evaluation of rural road maintenance management to handle the knowledge acquisition and accumulation that comprehensive evaluation requires. A modular design, coupled with fuzzy theory and the neural network approach, was employed to develop a rural road maintenance management assessment model with built-in expert knowledge. In this model, the indexes are first fuzzified according to fuzzy theory, then processed in a multi-layer neural network, and finally defuzzified to produce the data that support and finalize the assessment. An example illustrates the working mechanism of the assessment system and demonstrates the feasibility and validity of the fuzzy neural network-based assessment model.
This paper establishes a user equilibrium model based on cumulative prospect theory (CPT) for a degradable transport network and shows the existence of a solution. In numerical experiments, the network equilibrium flow patterns under expected utility theory (EUT) and under CPT are calculated and compared; the comparison indicates that the CPT-based model describes travelers' route choice decision processes better than the EUT-based model.
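The CPT ingredients that distinguish this model from an EUT model can be sketched with the standard Tversky-Kahneman value and probability-weighting functions (parameters α = 0.88, λ = 2.25, γ = 0.61 are the original CPT estimates); this illustrates the behavioral primitives only, not the paper's network equilibrium formulation.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and
    steeper (loss aversion lam) for losses, around a reference point 0."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities
    and underweights moderate-to-large ones."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def cpt_value(outcome, prob):
    """CPT evaluation of a binary prospect (outcome with prob, else 0)."""
    return weight(prob) * value(outcome)
```

Under EUT the same prospect would be valued with the raw probability; the gap between `weight(p)` and `p` (and the loss-averse kink in `value`) is what lets the CPT-based model capture route choices on a degradable network that EUT misses.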
A multi-way dataflow model of computation is introduced, and an approach to hierarchical compositional modeling is proposed for cloud cyber-physical systems. A cloud cyber-physical system comprises a large number of interconnected embedded devices and computer nodes; the development difficulty lies in its great complexity, which challenges design and management, and in its massive data volume. A feasible solution is to construct abstract design models with multi-way dataflow, use automated tools for simulation, verification, and optimization, and then map the designs to final systems. Multi-way dataflow extends traditional dataflow and Google's MapReduce computation framework; it simplifies design and facilitates the implementation of final systems in cloud computing environments.
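The MapReduce pattern that multi-way dataflow generalizes can be sketched as three composable stages, map, shuffle (the many-to-many fan-in point), and reduce; the word-count payload is an illustrative stand-in, not the paper's model.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (key, value) pairs; here (word, 1) for each word."""
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key -- the multi-way fan-in point
    that a dataflow graph makes explicit."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    """Reduce: fold each key's value list; here a sum."""
    return {k: sum(vs) for k, vs in groups.items()}

counts = reduce_phase(shuffle(map_phase(["a b a", "b a"])))
```

Each stage is a node with typed inputs and outputs, so larger designs can be composed hierarchically, which is the compositional-modeling idea the abstract advocates.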
Many real-world optimization problems are both dynamic and multi-modal, requiring an optimization algorithm not only to find as many optima as possible in a given environment but also to track their trajectories across dynamic environments. To address this requirement, this paper investigates a memetic particle swarm algorithm for dynamic multi-modal optimization problems. Within the framework of the proposed algorithm, a new speciation method is employed to locate multiple peaks, and an adaptive local search method is hybridized to accelerate the exploitation of the species generated by the speciation method. In addition, re-initialization schemes are introduced to further enhance performance in dynamic multi-modal environments. Experiments on the moving peaks benchmark problems compare the memetic particle swarm algorithm with several state-of-the-art algorithms from the literature, and the results show the efficiency of the proposed algorithm for dynamic multi-modal optimization problems.
Swarm intelligence algorithms, derived from simulating natural evolution or the collective behavior of animals, seek solutions to complicated optimization problems by exploring and exploiting the search space efficiently and effectively. Through an analysis of their search characteristics, this paper introduces the concept of solution-set diversity based on the structural changes of solution sets during search. Two categories of fundamental search strategies, diversification and intensification, are then defined, and their influence on the stagnation of solution-set evolution is investigated on the basis of solution-set diversity. It is proved that an intensification strategy inevitably drives the candidate solutions toward a single solution, which is one of the main sources of stagnation, whereas a diversification strategy can reach any point of the coding space from any initial candidate solution, i.e., the whole search space is its reachable region, although the convergence of the algorithm cannot then be guaranteed. Three popular swarm intelligence algorithms, the canonical genetic algorithm, the ant colony system, and discrete particle swarm optimization, are tested on a benchmark problem, and the results support the theoretical conclusions.
A dissimilarity-based kernel space for a hybrid manifold learning and support vector machine (SVM) classification algorithm is proposed to solve the high-dimensional data classification problem. In the algorithm, the SVM classifies the low-dimensional embedding produced by the manifold learning method. The dissimilarity-based kernel space is constructed for the SVM by a constant-additive method, which guarantees that the kernel is positive semi-definite under the dissimilarity description and satisfies the Mercer condition. Simulations on data sets from the UCI benchmark repository compare the proposed kernel with several common kernels, including the linear, polynomial, and Gaussian kernels, and validate its effectiveness; the dissimilarity-based kernel space proves to be a well-suited kernel space for the hybrid manifold learning and SVM classification problem.
A radial basis function (RBF) network inverse control method is proposed for a class of multi-input multi-output (MIMO) nonaffine nonlinear systems. In this method, the nonaffine term, which is hard to invert, is decomposed into an invertible part and a non-invertible part: the invertible part is used to approximate the inverse of the system, with the inversion error approximately canceled by an adaptive signal, while the non-invertible part is used to update the RBF network's weights. Using Lyapunov's direct method, it is shown that all signals of the closed-loop system are uniformly ultimately bounded. Simulation results demonstrate the good tracking performance and effectiveness of the proposed method.
To deal with the fog-of-war problem faced by computer-generated forces during behavior decision-making, especially incomplete information in the decision attributes, a behavior decision method based on evidence theory is proposed. First, a normalized credibility expression of exact, fuzzy, and uncertain decision-attribute information is presented, based on an analysis of the characteristics of battlefield decision information. An execution credibility is then put forward for the uncertainty of candidate schemes. Finally, the credibility information of the multiple attributes is fused through the Dempster-Shafer rule of combination. Experimental analysis illustrates the feasibility and effectiveness of the method in handling incomplete information.
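The Dempster-Shafer combination step at the core of this method can be sketched directly; the two-hypothesis frame ("attack"/"retreat") and the mass assignments are hypothetical, chosen only to illustrate the rule.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination: multiply the masses of every
    pair of focal elements, assign each product to the intersection,
    and renormalize by 1 - K, where K is the total mass falling on
    empty (conflicting) intersections."""
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}, conflict

A, B = frozenset({"attack"}), frozenset({"retreat"})
m1 = {A: 0.6, A | B: 0.4}   # evidence source 1 (0.4 is uncommitted mass)
m2 = {A: 0.5, A | B: 0.5}   # evidence source 2
m, k = dempster_combine(m1, m2)
```

Because both sources lean toward the same hypothesis, combination concentrates mass on it (m({attack}) = 0.8) with zero conflict; with contradictory sources, K grows and the renormalization amplifies the agreeing remainder.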
Particle swarm optimization (PSO), a novel intelligent optimization algorithm, has been used successfully in many fields, but its application to flood hazard risk assessment is a new research topic. This paper introduces the theory and workflow of applying a particle swarm optimization rule mining (PSO-Miner) algorithm to flood damage risk assessment, taking the Beijiang River Basin, China, as the study area and comparing the PSO-Miner algorithm with a BP artificial neural network (BPANN) method. The case study indicates the following advantages of the PSO-Miner algorithm: it makes no implicit assumptions about the data set it processes and is robust; it can mine very simple assessment rules; and it can outperform the BPANN model. The PSO-Miner algorithm thus provides a new approach for flood risk assessment.
Affected by the randomness of the times of pit submergence and recovery, uncertain factors such as process durations, effective construction days, and the completion date make the construction schedule delay of a hydropower project with an earth-rock overtopping cofferdam unpredictable. Based on the online character of construction schedule risk, the schedule risk of hydropower engineering is analyzed, an online schedule risk model is built, and the competitive ratio of the strategy is calculated. In light of the relationship between the general online strategy and the optimal offline strategy, a risk-reward model is introduced to optimize the risk compensation strategy, which is further adjusted according to different decision-makers' risk tolerances and forecasts.
Based on partial least squares regression, the form factor of displacement deep-vee vessels was analyzed and modeled. The application range of the regression formula was derived, and the calculated results were compared with experimental results and with the Holtrop method. The results reveal that, among the hull parameters, the length-beam ratio is the most important in explaining the form factor, followed by the draft-length ratio, the deadrise, and the angle-of-entrance ratio, while the influence of the run length and the prismatic coefficient is weaker. The regression formula for the form factor of displacement deep-vee vessels applies for L/▽^{1/3} = 7.0-7.9 and Fr < 0.44. Moreover, the Holtrop total resistance method combined with this formula is more accurate than the original Holtrop method. The partial least squares regression method is thus applicable to regressing the form factor of displacement deep-vee vessels.