In this paper we establish an entropy spatial model of credit risk contagion in the credit risk transfer (CRT) market, which considers the effects of spatial, industry-specific, regional financial and individual factors on credit risk contagion in the CRT market. We use numerical simulation to analyze and describe the influence and underlying mechanisms of the spatial distance and transmission capacity between banks and investors, banks' asset quality and credit risk transfer ability, investors' asset scale and risk preference level, the financial development level of investors' regions, and the regional homogeneity between banks and investors on credit risk contagion in the CRT market. This contribution explicitly formalizes the connection between contagion probability and spatial factors, and provides a new idea and theoretical framework for the study of credit risk contagion in a spatial context.
Using data on initial public offering (IPO) companies between 2009 and 2014, we examine the relationship between media coverage and turnover on the first day of trading. Media attention is positively related to the first-day turnover rate, an effect observed when the reputation of the underwriter is low; the strong signal effect of a high-reputation underwriter weakens this relation. Furthermore, we divide the sample into two sub-samples around the 2013 policy change: IPO companies before the change and those after it. It turns out that the effect of media sentiment on IPO turnover disappears after the policy change, which may be explained by substitution between the policy change and the information-dissemination role of media. By using an IPO sample to eliminate other interferences, the paper verifies the infomediary role of media in the information market and identifies the conditions under which it operates.
New individual income tax rates took effect on September 1, 2011. Besides such direct taxes, indirect taxes, such as the consumption tax, also affect individuals' disposable income. How do these different taxes influence social welfare and economic activity in China? What suggestions can be given to policymakers? This research analyzes how the personal income tax and the consumption tax influence all aspects of society, ranging from international trade to GDP. The paper builds a computable general equilibrium model and employs 2010 national statistical data to show that both reducing the consumption tax and personal income tax rates for individuals in rural areas and raising the personal income tax rate for those in urban areas will improve the welfare and income of the rural population. Furthermore, expanding imports in the primary industry and reducing exports in the secondary industry will raise people's living standards, increase overall well-being and promote the growth of domestic production. Besides, when net imports in the primary industry and net exports in the secondary industry increase simultaneously, the macroeconomic indicators are better off, at the cost of a huge shock to the primary industry. Finally, the paper indicates that the elasticity of substitution in the utility function has less effect on welfare than the elasticity of substitution in the production function.
The hidden factor is information that has little or no effect on the current interest rate curve but affects expected future interest rates. Using the hidden factor idea proposed by Duffee, we find that the variance ratio of the five-factor model in China's treasury bond market is only 0.41, which confirms the existence of a hidden factor. We then exclude the interest rate curve information from the risk premium factor to obtain the hidden factor, and confirm that the hidden factor in China does contain rich economic information: it is closely linked with inflation and macro-liquidity, especially inflation, but has little to do with economic growth. Finally, we find that the hidden factor in China has considerable power in predicting future interest rates.
Inter-industry linkages combine with inter-agency relations to form a network, called an industrial cluster. The industrial cluster is an important field for studying regional growth, competitiveness and innovation diffusion, and studying the linkage structure of its constituent elements is of great significance. However, there are few research achievements in this field, especially quantitative ones. Based on industrial network theory and from the perspective of product linkages, this paper describes the industrial cluster as a network among sectors, proposes a special structure of the industrial cluster, namely its core structure, and then puts forward a quantitative index system for the core structure, including the aggregation coefficient, radiation coefficient, centrality coefficient, etc. A practical calculation example shows that the index system can describe the characteristics of industrial clusters in different regions, reflect the differences in competitiveness among regions, and is an effective method for interpreting competitiveness through the industrial cluster core structure.
Electronic word-of-mouth (e-WOM) is one of the most important factors that may affect consumers' behavior. Opinions expressed toward a product in online reviews influence consumers' purchase decisions by changing their perceptions of that product's quality. Furthermore, each product aspect may affect consumers' intentions differently. Thus, sentiment analysis and econometric models are combined to explore the relation between purchase intentions and aspect-opinion pairs, which can be used to estimate the weight of each product aspect. We first identify product aspects and reduce the dimension to extract aspect-opinion pairs. Next, the information gain of each aspect is calculated through entropy theory. Based on sentiment polarity and sentiment strength, we construct an econometric model integrating the information gain to quantify each aspect's weight. We track 386 digital cameras on Amazon over 39 months, and the experimental results show that the aspect weights for digital cameras are detected more precisely than with the TF-IDF and HAC algorithms. The results bridge product aspects and purchase intention to facilitate e-WOM-based marketing.
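The entropy-based information gain step can be sketched as follows. The partitioning of reviews by a purchase label and per-aspect sentiment polarity, and the toy data, are illustrative assumptions rather than the paper's actual pipeline:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence (e.g. purchase / no purchase)."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, aspect_sentiments):
    """Entropy reduction of the purchase labels when reviews are
    partitioned by the sentiment polarity of a single product aspect."""
    n = len(labels)
    groups = {}
    for y, s in zip(labels, aspect_sentiments):
        groups.setdefault(s, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

# toy data: purchase decisions vs. sentiment on a hypothetical "lens" aspect
buy = [1, 1, 0, 0, 1, 0]
lens_sentiment = ["pos", "pos", "neg", "neg", "pos", "neg"]
print(information_gain(buy, lens_sentiment))  # 1.0: perfectly informative aspect
```

An aspect whose sentiment perfectly separates buyers from non-buyers attains the maximal gain (here 1 bit); uninformative aspects score near zero, giving them low weight in the econometric model.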
The global warming problem caused by carbon emissions significantly influences the sustainable development of human society, and more and more countries and regions are taking measures to cope with it. Research indicates that consumers' energy consumption has a significant influence on carbon emissions. Guiding, regulating and motivating consumers' energy consumption behavior helps establish a low-carbon energy consumption pattern, which has important theoretical and practical significance. Based on a personal carbon trading scheme, we construct an energy use choice model and analyze the market equilibrium of carbon trading. It is shown that a personal carbon trading scheme can motivate consumers to choose clean energy and adopt a low-carbon energy consumption pattern. The supply curve of the carbon allowance differs from the general commodity supply curve: when the carbon price increases, the supplied quantity of carbon allowances decreases. The initially allocated carbon allowance should be set at an appropriate level so that carbon trading among consumers can be achieved.
Some fast food restaurants that provide takeaway service may refuse some delivery orders when they receive requests from customers, so as to improve efficiency and cut costs. This results in some form of penalty, such as reputation damage. To minimize the total serving time plus the total penalties of all rejected requests, we propose a real-time version of the online travelling salesman problem (OL-TSP) with rejection options and advanced information. We propose lower bounds and algorithms on the non-negative real line and on the real line. These conclusions are more general than the results for the real-time version of the online TSP with rejection options.
User-innovation knowledge, as an important source of product and service innovation, is of significance to enterprises. In this paper, we propose a discovery and analysis method for user-innovation knowledge in enterprises' virtual communities based on a weighted knowledge network (WKN). First, user-innovation knowledge presented as forum posts in virtual communities is acquired through web content mining. After processing, the frequency of keywords and the attention degree of each post and keyword are calculated and assigned as two types of node weights in the WKN model. Next, based on the WKN model, the core knowledge and the relations between knowledge points are identified by analyzing the nodes, edges, weights and ego networks. Finally, the Xiaomi and Huawei communities are taken as case studies to verify the validity of the method. Compared with existing knowledge discovery methods, the presented method identifies knowledge and the inter-relations between knowledge points more clearly and thoroughly, and it also provides a new tool for studying user-innovation knowledge in enterprise virtual communities.
In this paper, a human resource management model is presented based on the law of diminishing marginal utility from economics. The proposed method provides the optimal cost investment program at any given level of total utility or input cost, enabling the optimization of human resource allocation. Simulation results show that the proposed model can improve resource utilization and the quality and efficiency of human resource management, offering practical guidance for human resource management practices.
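As an illustration of the underlying principle (not the paper's actual model), a greedy allocation under an assumed logarithmic utility u_i(x) = a_i·ln(1+x) sends each cost unit to the category with the highest current marginal utility, which is the discrete analogue of equating marginal utilities across categories:

```python
import math

def allocate(budget_units, coeffs):
    """Assign each cost unit to the HR category with the highest current
    marginal utility under the assumed utility u_i(x) = a_i * ln(1 + x);
    diminishing returns make repeated investment in one category less
    attractive, spreading the budget across categories."""
    alloc = [0] * len(coeffs)
    for _ in range(budget_units):
        # marginal utility of one more unit in each category
        gains = [a * (math.log(alloc[i] + 2) - math.log(alloc[i] + 1))
                 for i, a in enumerate(coeffs)]
        best = max(range(len(coeffs)), key=lambda i: gains[i])
        alloc[best] += 1
    return alloc

# three hypothetical HR categories with different utility coefficients
print(allocate(10, [3.0, 2.0, 1.0]))
```

For a concave utility, this greedy rule is optimal for the discrete budget: at every step no other unit placement yields a larger utility increase.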
This paper introduces the cognitive hierarchy model into fictitious play to study the heterogeneous cognitive levels of individuals. Belief learning and update rules affect the final equilibrium convergence of the coordination game. The research shows that a high-cognitive-level player's strategy choice depends on the belief attached to the strategy choices of lower cognitive levels; a player's strategy choice is related to its belief about its opponent's initial strategy choice; and belief updating and the number of game rounds affect the cooperation level of the final game system.
The emergence of cooperation on complex networks is of great importance. To investigate the effects of scale-free network topology on the cooperation density of the repeated snowdrift game with the Schlag replication strategy, within the framework of tunable scale-free networks (a power-law-alterable scale-free network model and a clustering-coefficient-alterable scale-free model), the relations of the network's power-law exponent, clustering coefficient and average degree to the cooperation density on the network were studied. The results show that the network's cooperation level is consistent with the uniformity of the network and can be enhanced by a highly clustered topology; in other words, the higher the power-law exponent and the less clustered the network, the weaker the network's cooperation. The relation between average degree and cooperation density is non-monotonic.
Because of the complexity, sensitivity and high risk of prosecution-related letters and visits, any carelessness in their reception could cause social problems harmful to people's lives and property. This paper is the first attempt to analyze risk prevention strategies for prosecution-related letters and visits using game theory models. Static games of incomplete information are established to analyze trial strategies; dynamic games of incomplete information are established to analyze reception strategies. The equilibrium outcomes show that there are unreasonable factors in the reception strategies of the People's Court, and that a scientific early warning system could prevent the risks of prosecution-related letters and visits and efficiently allocate judicial resources. From the perspective of the People's Court, this paper combines the equilibrium outcomes of the games with risk early warning models, and proposes risk prevention and coping strategies for prosecution-related letters and visits.
Terrorist attacks have become one of the main threats jeopardizing public security. We study the defense competition problem between two vulnerable targets, take spatial factors into consideration for the first time, and derive the optimal defense scope based on the Hotelling model. By exploring both the simultaneous game and the sequential game, we find that the former leads to defense competition while the latter may cause free-riding behavior. When the two targets are symmetric, no matter which game is considered, the more valuable a target is, the lower the probability that it will be attacked. Compared with enlarging the defense scope, increasing the defense density is more efficient for improving the defense level. The optimal defense scope is independent of the target value in the simultaneous game but depends on it in the sequential game. The findings in this paper provide guidance for building defensive structures for different targets.
The composability of simulation models is at the core of the theory and methods of composable simulation. Based on a composition-oriented M&S framework, the significance of the "context" to composability evaluation was described. Through formal definitions of the application context, the model context and their relationship with composability, evaluating composability amounts to determining whether the model context of the simulation models and their composition holds under the restriction of the application context. Therefore, based on interface pruning and behavior pruning in the model context, the conditions for model compositions that meet the application requirements were put forward, and criteria for the four IO levels were induced. The criteria not only help to clearly delineate the research level and content of composability evaluation, but also support the engineering practice of composable design and the assessment of simulation models' maturity.
In this paper, a new method for transshipment operations at container terminals was proposed, in which part of the transshipment containers are loaded directly onto trunk-line vessels from feeder vessels, or vice versa, instead of being buffered in the storage yard. A berth allocation model based on direct transshipment was developed, considering the delay cost of vessels and the operation costs of trucks and yard cranes. To solve the model, a heuristic algorithm was designed, and numerical experiments were provided to illustrate the validity of the model and algorithm. The results indicate that the proposed method can decrease operation costs, improve transshipment efficiency and thus help improve the competitiveness of container hub ports.
Under scenarios of sports events and other massive gatherings, a large number of people travel along road networks, forming crowded pedestrian flows. High-density situations might induce mass panic and stampedes, causing serious injuries and casualties. Therefore, it is necessary first to understand pedestrian flow features on the corresponding network and then to apply proper control methods to organize and manage pedestrian flow. Firstly, considering the general characteristics of travelers' route choice behavior, pedestrians are dynamically assigned onto the road network via a traffic assignment model. Secondly, a nonlinear pedestrian road impedance function is introduced to address the congestion effects of massive crowds, and the pedestrian travel time changing rate is calculated for each link under various traffic volumes. The network's links are then sorted according to the travel time changing rate, and the rank is defined as a measure of the structural importance of each link. Further, the structural importance is used to determine link robustness under different pedestrian traffic demands. Combining the results on structural importance and link robustness, bottlenecks of the crowd circulation network can be identified. Finally, the Love Parade 2010 in Duisburg (an electronic music festival) is adopted as a case to illustrate the applicability of the proposed network bottleneck identification method. The threshold of pedestrian volume on the network and the effect of road control strategies are further investigated; the obtained basic data and theoretical results can benefit the scientific planning and management of massive crowd road traffic.
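The paper's impedance function is not reproduced in the abstract; a BPR-style volume-delay function is a common assumption for nonlinear link impedance, and its derivative gives a travel time changing rate by which links can be ranked (all parameter values below are illustrative):

```python
def bpr_time(v, t0=1.0, cap=100.0, alpha=0.15, beta=4.0):
    """BPR-style nonlinear impedance: link travel time rises with the
    volume/capacity ratio. Parameter values here are illustrative,
    not taken from the paper."""
    return t0 * (1.0 + alpha * (v / cap) ** beta)

def changing_rate(v, t0=1.0, cap=100.0, alpha=0.15, beta=4.0):
    """Analytic derivative d(travel time)/d(volume) of bpr_time."""
    return t0 * alpha * beta * v ** (beta - 1.0) / cap ** beta

# rank links by how fast their travel time grows at current volumes:
# the steeper the rate, the more structurally important the link
volumes = {"A": 120.0, "B": 80.0, "C": 150.0}
ranked = sorted(volumes, key=lambda k: changing_rate(volumes[k]), reverse=True)
print(ranked)  # ['C', 'A', 'B']
```

Because the changing rate is monotone in volume for this functional form, the ranking simply follows current link loads; with heterogeneous capacities and free-flow times the ranking would differ from a raw volume ranking.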
In studies of M/G/1/∞ queueing models with N-policy, since the waiting time of a customer arriving during a server off-duty period is no longer independent of the inter-arrival times of later customers, it is difficult to investigate the distribution of the equilibrium waiting time; considerable effort has been devoted to the steady-state queue length and additional queue length, while relatively little work has been done on the stationary waiting time and its stochastic decomposition. For this reason, this paper first treats the classical N-policy M/G/1/∞ queueing system. We study the equilibrium waiting time distribution and present the stochastic decomposition of the steady-state waiting time as well as an explicit expression for the distribution of the additional delay time. Meanwhile, some errors in corresponding results in existing references are pointed out. Further, we consider M/G/1/∞ queueing systems with multiple server vacations and a single server vacation under the Min(N,V)-policy. By a similar analytical method, we not only obtain the stochastic decomposition of the equilibrium waiting time but also derive formulas for the mean stationary waiting time and the mean additional delay time. In particular, corresponding results for some special queueing systems can be obtained directly from the results provided in this paper.
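For the classical N-policy M/G/1 queue, the stochastic decomposition referred to above is commonly stated in the following form (λ the Poisson arrival rate, S the service time, ρ = λE[S] < 1); this is the textbook version, not necessarily the paper's exact expression:

```latex
% Stochastic decomposition of the mean stationary waiting time W in the
% N-policy M/G/1 queue: Poisson arrival rate \lambda, service time S,
% traffic intensity \rho = \lambda E[S] < 1.
E[W] \;=\; \underbrace{\frac{\lambda\, E[S^{2}]}{2(1-\rho)}}_{\text{classical M/G/1 (P--K) wait}}
\;+\; \underbrace{\frac{N-1}{2\lambda}}_{\text{mean additional delay}}
```

Setting N = 1 removes the additional delay term and recovers the Pollaczek-Khinchine mean waiting time, which is the consistency check the decomposition must pass.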
In order to study the mean failure times of repairable systems under imperfect repair, Kijima's virtual age model based on the generalized renewal process (GRP) is improved to describe imperfect repair. The relevant parameters of the process are estimated by maximum likelihood estimation. The first failure time and subsequent failure times are obtained by Monte Carlo (MC) simulation, and the mean failure times of the system under imperfect repair are calculated at different times. The data follow a Weibull distribution, and data from the literature are combined with the model for simulation. The mean failure times under the imperfect repair model, the perfect repair model and the general repair model are compared. The results show that before 7000 h, the mean failure times of the system are determined by the imperfect repair model; after 7000 h, they are determined by the mean of the imperfect repair model and the general repair model.
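The Monte Carlo step can be sketched as follows for a Kijima Type-I virtual-age GRP with an underlying Weibull life distribution; the sampling uses the inverse of the Weibull CDF conditional on survival to the current virtual age, and the parameter values in the usage below are illustrative, not the paper's fitted ones:

```python
import math
import random

def grp_failure_times(n, beta, eta, q, seed=1):
    """Successive failure times under a Kijima Type-I virtual-age GRP with
    underlying Weibull(beta, eta) lifetimes. After each repair only a
    fraction q of the last operating interval is added to the virtual age,
    so q = 0 is perfect repair (renewal) and q = 1 is minimal repair."""
    rng = random.Random(seed)
    v = 0.0           # virtual age just after the last repair
    t = 0.0           # calendar time
    times = []
    for _ in range(n):
        u = rng.random()
        # inverse of the Weibull CDF conditional on survival to age v
        x = eta * ((v / eta) ** beta - math.log(u)) ** (1.0 / beta) - v
        t += x
        v += q * x    # Kijima Type-I virtual-age update
        times.append(t)
    return times
```

With an increasing hazard (beta > 1), the minimal-repair trajectory (q = 1) accumulates age and fails faster than the renewal trajectory (q = 0), which is the qualitative behavior the comparison of repair models relies on.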
This paper focuses on multi-class imbalanced data classification and proposes a BPSO-Adaboost-KNN ensemble learning algorithm based on feature selection and ensemble learning. Moreover, the algorithm uses a visual AUCarea metric to evaluate classifier performance on multi-class classification problems. Ten groups of UCI and KEEL data sets are used to test the proposed algorithm. The results show that the proposed algorithm improves the stability of Adaboost after extracting the key features, and the classification accuracy on the ten data sets is 20%-40% higher than that of the KNN classifier. When compared with three other state-of-the-art ensemble algorithms, BPSO-Adaboost-KNN obtains equal or better results. Finally, the proposed algorithm is applied to oil-bearing reservoir recognition, where three key attributes (acoustic wave, porosity and oil saturation) are successfully selected. The classification precision reaches more than 98% on the oilsk81-oilsk85 Jianghan well logging data, 20% higher than the KNN classifier. In particular, the proposed algorithm shows significant superiority in distinguishing the oil layer from other layers.
A high-quality control system should be stable, accurate and fast. This paper studies the high-quality control problem for complex systems under multiple-model conditions. Firstly, the system's dynamic behavior under different conditions is described using a bank of models. Secondly, regarding the posterior probabilities as coordinate variables, a multiple-model learning and optimization control algorithm is proposed, and it is proved that the control law proposed in this paper is equivalent to the control law with the real model under steady-state conditions. At the same time, the controller does not need to switch abruptly among multiple models; in other words, it is a soft-switching strategy. Finally, two simulation results illustrate the effectiveness of the algorithm.
With the diversification and miniaturization of customer orders, enterprises need more refined management to further reduce costs while meeting service needs. This paper addresses purchasing waste by exploring the specifications and weights of steel coils that an ERW pipe producer needs to determine before making a steel coil purchasing plan. Firstly, the paper takes into account the rip-cutting process that transforms steel coils into steel pipes and effectively utilizes width combinations to make split-joint coils. Integrating the process constraints, the paper establishes a mathematical model that maximizes the use ratio of split-joint coils; the problem is a typical NP-hard problem. Then, an improved genetic algorithm is proposed in which the lengths of the genes and chromosomes are both determined by the orders. Finally, the validity of the model and the stability of the algorithm are verified through extensive test examples. The results show that in most cases the use ratio is increased substantially. The approach provides powerful theoretical and practical support for manufacturers in cost reduction.
The feature weighting algorithm has a great impact on classification results, but traditional algorithms do not consider the distribution information among and within classes. Therefore, an adaptive entropy-weighted fuzzy C-means clustering algorithm (AEWFCM) is proposed, which studies the ordering degree of feature attributes after clustering and analyzes the distribution of feature attributes. Both the clustering feature entropy and the information gain serve as criteria to adjust the feature weights; through iterative clustering, the weights are gradually and continuously updated until the best feature weights are obtained. Experimental results show that the AEWFCM algorithm can effectively distinguish the importance of feature attributes to the clustering results, and, compared with other well-known fuzzy C-means clustering algorithms, it achieves higher clustering accuracy on the same samples.
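A simplified stand-in shows where the feature weights enter the fuzzy C-means iteration; unlike AEWFCM, this sketch keeps the weights fixed rather than re-estimating them each iteration from the clustering feature entropy and information gain:

```python
def weighted_fcm(X, k, w, m=2.0, iters=50):
    """Fuzzy C-means with fixed per-feature weights w. AEWFCM additionally
    adapts w each iteration from the feature entropy, omitted here."""
    n, d = len(X), len(X[0])
    # deterministic initialisation spread over the data (demo only)
    centers = [list(X[(i * (n - 1)) // max(k - 1, 1)]) for i in range(k)]
    U = []
    for _ in range(iters):
        # membership update with feature-weighted squared distances
        U = []
        for x in X:
            dist = [sum(w[f] * (x[f] - c[f]) ** 2 for f in range(d)) + 1e-12
                    for c in centers]
            U.append([1.0 / sum((dist[i] / dist[j]) ** (1.0 / (m - 1.0))
                                for j in range(k)) for i in range(k)])
        # center update (memberships raised to the fuzzifier m)
        for i in range(k):
            s = sum(U[p][i] ** m for p in range(n))
            centers[i] = [sum(U[p][i] ** m * X[p][f] for p in range(n)) / s
                          for f in range(d)]
    return centers, U

# two well-separated 2-D groups; equal feature weights for the demo
X = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9]]
centers, U = weighted_fcm(X, k=2, w=[1.0, 1.0])
```

Raising the weight of an informative feature stretches distances along that axis, so cluster boundaries follow it more closely; driving an irrelevant feature's weight toward zero removes its influence, which is the effect the entropy-driven update automates.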
To effectively improve the efficiency of the operating theatre, reduce the hospital's costs and improve patient satisfaction, an operating theatre scheduling method based on a Lagrangian relaxation (LR) algorithm was presented. Firstly, the problem domain was described, and mathematical programming models were set up with the objectives of minimizing the related costs of the operating theatre and maximizing patient satisfaction. On this basis, a policy for generating feasible scheduling solutions was established. Combined with the specific constraints of the operating theatre, the LR-based algorithm was put forward to solve the scheduling problems, with the sub-problems solved via a branch-and-bound algorithm. Finally, computational experiments were performed on problems of different scales, and the performance of the proposed algorithm was evaluated and compared with that of other approaches. The results demonstrate that the proposed method can obtain good near-optimal solutions in acceptable computation time.
Aiming at the low efficiency of existing intelligent algorithms in solving large-scale complex distribution network reconfiguration, a new data structure, called the node-depth-degree representation (NDDR), was applied to encode the power distribution network, and NDDR-based genetic operators with clear physical meaning were also constructed. The proposed method guarantees that the networks corresponding to all chromosomes in the initial population, and to any chromosome generated during genetic manipulation, satisfy the radial constraint, which avoids the repeated checking and repair of the radial constraint required by existing methods and significantly reduces the computational burden. Test results on three sample test systems and three large-scale complex distribution systems show that the optimal solution can be obtained with high probability, indicating rapid convergence and good stability of the proposed method. Compared to the loop-encoding-based GA, the NDDR-based GA significantly reduces the computation time and obtains high-quality solutions in a short time when solving large-scale complex distribution networks, which indicates the good practical value of the proposed method.
Situational awareness (SA) is a key element affecting operators' decision-making and performance in nuclear power plants, and SA reliability is an important part of human reliability analysis. To quantitatively assess SA reliability, a Bayesian network model for analyzing operators' SA reliability in digital main control rooms is built through interviews with operators. Furthermore, a quantitative assessment procedure for SA reliability is established, and the conditional probability distributions of the influencing factors are determined more objectively on the basis of full-scale simulator experiment data from nuclear power plants and Bayesian theory. The model is used to quantitatively calculate SA reliability, and a case is used to illustrate its specific application. The results show that the model can provide more reliable data and a standardized analysis procedure to support the evaluation of operators' SA reliability in digital nuclear power plants.
To address the dynamic risk assessment of toxic gas leakage accidents, a dynamic prediction model of toxic gas concentration is first formulated based on cellular automata (CA) theory, and dynamic individual risk and social risk assessment models for toxic gas leakage accidents are then constructed on top of it. Finally, the proposed model is verified through the simulation of a chlorine gas leakage accident. The results show that the model can simultaneously predict the impact scope of a leakage accident and efficiently evaluate, in real time, the toxic gas concentration, individual risk and social risk at any location within that scope. The dynamic individual and social risk assessments obtained with the proposed model can provide scientific and effective support for emergency evacuation and reasonable emergency resource allocation.
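A minimal CA sketch of the concentration-prediction and exposure-accumulation idea follows; the grid size, diffusion fraction, leak rate and dose threshold are all illustrative assumptions, and the update rule is a generic mass-conserving diffusion CA rather than the paper's calibrated model:

```python
def step(grid, D=0.2):
    """One cellular-automaton update: each cell exchanges a fraction D of the
    concentration difference with its four von Neumann neighbours
    (closed boundary, so total mass is conserved)."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= a < n and 0 <= b < m:
                    new[i][j] += D * (grid[a][b] - grid[i][j]) / 4.0
    return new

# continuous leak at the centre; accumulate time-integrated exposure per cell
N = 11
grid = [[0.0] * N for _ in range(N)]
dose = [[0.0] * N for _ in range(N)]
for _ in range(100):
    grid[N // 2][N // 2] += 10.0      # leak source term (illustrative rate)
    grid = step(grid)
    for i in range(N):
        for j in range(N):
            dose[i][j] += grid[i][j]
# individual risk proxy: cells whose accumulated dose exceeds a threshold
at_risk = sum(1 for row in dose for v in row if v > 50.0)
```

Because each pairwise exchange is antisymmetric, total concentration equals exactly what the source injected, and the accumulated-dose map gives the impact scope used for individual and social risk evaluation.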
Predicting radioactive contaminant diffusion in receiving water with a mechanism model based on computational fluid dynamics takes a long time, which is not applicable in emergency situations under accident conditions. To shorten the computation time, a new artificial neural network model is proposed that combines the species transport equation governing contaminant diffusion with a neural network, and an improved particle swarm optimization algorithm is used to obtain the weight and threshold values of the network. As a case study, the diffusion of long half-life radionuclides in the Fushui reservoir after a postulated accident at the Xianning nuclear power station in Hubei Province is examined. The results show that the proposed model can predict the contaminant diffusion trend, and the prediction results fit well with the CFD simulation output. Compared with the conventional black-box neural network model and models whose prior knowledge is obtained from data monotonicity, the prior knowledge obtained from the physical mechanism equation is a stronger constraint, which makes the prediction results closer to the simulation output.