Algorithms for the proportional rounding of a nonnegative vector, and for the biproportional rounding of a nonnegative matrix are discussed. Here we view vector and matrix rounding as special instances of a generic optimization problem that employs an additive version of the objective function of Gaffke and Pukelsheim (2007). The generic problem turns out to be a separable convex integer optimization problem, in which the linear equality constraints are given by a totally unimodular coefficient matrix. So, despite the integer restrictions of the variables, Fenchel duality applies. Our chief goal is to study the implied algorithmic consequences. We establish a general algorithm based on the primal optimization problem. Furthermore we show that the biproportional algorithm of Balinski and Demange (1989), when suitably generalized, derives from the dual optimization problem. Finally we comment on the shortcomings of the alternating scaling algorithm, a discrete variant of the well-known Iterative Proportional Fitting procedure.
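The Iterative Proportional Fitting procedure mentioned at the end can be sketched in a few lines. The following is a minimal illustration of the classical continuous procedure (function name and stopping rule are my own choices), not of the discrete alternating scaling variant the paper analyzes; it assumes a strictly positive weight matrix and targets with equal totals.

```python
import numpy as np

def ipf(weights, row_targets, col_targets, iters=1000, tol=1e-9):
    """Iterative Proportional Fitting: alternately rescale rows and
    columns of a positive matrix until its row and column sums match
    the given targets (which must have equal totals)."""
    x = np.array(weights, dtype=float)
    r = np.asarray(row_targets, dtype=float)
    c = np.asarray(col_targets, dtype=float)
    for _ in range(iters):
        x *= (r / x.sum(axis=1))[:, None]   # fit row sums exactly
        x *= c / x.sum(axis=0)              # fit column sums exactly
        if np.allclose(x.sum(axis=1), r, atol=tol):
            break                           # rows survived the column step
    return x
```

Each pass fits the rows, then the columns; the discrete rounding problem replaces these continuous scalings by rounding steps, which is where the shortcomings discussed in the abstract arise.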


It is important to notice that the direction of the manifold cannot be changed through the introduction of inequality constraints. More specifically, a translation (case 2b), a general restriction (case 1b) or a dimension reduction (cases 1c and 2a) of the manifold are possible, but never a rotation. This leaves us in the comfortable situation that it is possible to determine the homogeneous solution of an ICLS problem by determining the homogeneous solution of the corresponding unconstrained WLS problem and reformulating the constraints in relation to this manifold. Therefore, our framework consists of the following major parts, which will be explained in detail in the next sections: To compute a general solution of an ICLS problem (3.8), we compute a general solution of the unconstrained WLS problem and perform a change of variables to reformulate the constraints in terms of the free variables of the homogeneous solution. Next, we determine whether there is an intersection between the manifold of solutions and the feasible region. In case of an intersection, we determine the shortest solution vector in the nullspace of the design matrix with respect to the inequality constraints and reformulate the homogeneous solution and the inequalities accordingly. If there is no intersection, we use the modified active-set method described in Sect. 5.2.2 to compute a particular solution and determine the uniqueness of the solution by checking for active parallel constraints.
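As a rough illustration of the change of variables described above, one can compute a particular least-squares solution, a nullspace basis N of the design matrix, and restate the inequalities Bx ≤ b in the free variables t of the homogeneous solution x = x_p + Nt. This is a minimal NumPy sketch under my own naming (and without weights); the actual framework, including the modified active-set method of Sect. 5.2.2, is considerably more involved.

```python
import numpy as np

def reformulate_constraints(A, y, B, b):
    """Write the LS solution manifold as x = x_p + N t (N = nullspace
    basis of A) and express the inequalities B x <= b in the free
    variables t: (B N) t <= b - B x_p."""
    x_p, *_ = np.linalg.lstsq(A, y, rcond=None)  # particular LS solution
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12))
    N = Vt[rank:].T                              # nullspace basis of A
    return x_p, N, B @ N, b - B @ x_p
```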


Imprint: Publisher: Institut für Mathematik, Universität Augsburg, 86135 Augsburg, http://www.math.uni-augsburg.de/forschung/preprint/ Responsible (ViSdP): Friedrich Pukelsheim, Institut für Mathematik


The basis of the success of all these LVQ algorithms was the simple heuristic learning strategy of LVQ1, which merely moves prototypes in a simple way (see Section 2.3). The major problem is that stable behaviour and convergence are not guaranteed during the learning phase, but in most cases the simple LVQ1 algorithm leads to a reasonable classifier in an appropriate time, and the solution is understandable for all users. Improvements were made to diminish the bad attributes of the basic LVQ1 approach. The most important and revolutionary improvement was the introduction of a differentiable cost function in GLVQ by Sato and Yamada (see Section 2.4) - a quantum leap for these classification methods. On the one hand, due to a differentiable cost function, stable behaviour and convergence are ensured, and on the other hand more and more ideas can be realized (see Figure 2.29), like learning a problem-specific metric during the learning phase. Both GRLVQ and GMLVQ have remarkable properties and enable the user to understand why the classification is realized in such a good way or not - additional information about the data can be found. Another advantage is that the concepts can be fitted to specific data. The GFRLVQ algorithm is the best example of how to modify the GRLVQ algorithm to take the functional context of data into account.
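The "simple way" in which LVQ1 moves prototypes can be made concrete: the prototype nearest to a training sample is attracted to it when the labels match and repelled otherwise. A minimal sketch (names and the in-place update are my own; see Section 2.3 of the thesis for the actual formulation):

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    """One LVQ1 learning step: find the prototype nearest to sample x
    and move it toward x if its label matches y, away from x otherwise."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    k = int(np.argmin(dists))                      # winner prototype
    sign = 1.0 if proto_labels[k] == y else -1.0   # attract or repel
    prototypes[k] += sign * lr * (x - prototypes[k])
    return k
```

Iterating this step over the training set yields the classifier; the lack of a cost function behind the update is exactly the convergence issue that GLVQ later addressed.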


The realization of this thesis would not have been possible if it were not for the assistance of numerous people. The author must therefore thank, first and foremost, his supervisors Professor emeritus Joachim Rosenmüller and Associate Professor Peter Sudhölter. Both have contributed enormously not only to the theoretical results but also to the presentation, the questions which were investigated, to my financial, psychological and emotional support, as well as contributing significantly to the enrichment of my stay in Germany in many other ways. I thank Claus-Jochen Haake for his assistance with many of my questions concerning all sorts of aspects of game theory. I must also thank Matthias Schleef for his unrelenting assistance with my LaTeX problems, Yaron Azrieli for reading and correcting some of the section on fuzzy games, and the members of EBIM and BIGSEM for their numerous discussions and exchanges of opinions.


Though overcoming this obstacle can be done in a surprisingly simple way, this approach can be easily overlooked. When designing the first PTAS for the Santa Claus scheduling problem on identical machines, Woeginger [Woe97] stated: “Rounding large jobs to a constant number of distinct job sizes of course simplifies the problem, but there seems to be no way of integrating the small jobs into a dynamic program.” The idea to circumvent this problem is to abstract from specific small rate allocations and only consider the required total volume. As already exploited in the PTAS in Section 4.1.3.1, Lemma 4.9 does not impose any assumption about the structure of the set of small channels other than that their individual sizes are bounded by a given threshold δ∗ := λd. This even allows us to replace them by a set of q := ⌊V/δ∗⌋ many channels of size δ∗ and one channel of size V − δ∗q without affecting the approximation guarantee. We may further omit this last channel and lose at most an additive term of δ∗ in the total data rate of at most one terminal.
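The volume abstraction amounts to two lines of arithmetic. A hypothetical sketch, with `threshold` standing for the size bound on small channels and the function name my own:

```python
import math

def abstract_small_channels(small_sizes, threshold):
    """Replace the small channels by their total volume V: keep
    q = floor(V / threshold) uniform channels of size `threshold`;
    the leftover channel of size V - q*threshold is dropped, costing
    at most `threshold` in the data rate of at most one terminal."""
    V = sum(small_sizes)
    q = math.floor(V / threshold)
    return [threshold] * q
```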


In the first part of the thesis, we concentrate on quadratic programs as an important subclass of MINLP. In this problem class, the multiplication of two variables is the only source of nonlinearity, and McCormick [McC76] already in 1976 provided a tight relaxation for the product of two bounded variables. This termwise relaxation, however, is weak when terms interact or when more constraints are present in the model. The so-called Reformulation-Linearization-Technique (RLT) [SA92] is well known to strengthen the relaxation of quadratic programs by capturing the structure of the constraints. In a nutshell, the idea is to multiply a linear constraint by a variable, which yields product terms. Auxiliary variables representing the products of two variables are used to reformulate the constraint, making it linear again. Different implementations of this method have different ways to handle product terms that appear only in constraints generated by this approach. We propose to project them out by replacing them with appropriate over- and underestimators, an approach that, to the best of our knowledge, has not been described in the literature. Within the work on this thesis, the author implemented the projected RLT cuts in the commercial solver CPLEX [IBMb], where they are enabled in the default settings for optimization problems with a non-convex quadratic objective in version 12.7.0. An overview of reformulations and relaxations for quadratic programs, together with a presentation of (projected) RLT, is given in Chapter 3.
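The basic RLT step, multiplying a linear constraint by a nonnegative variable and renaming each product x_i·x_j as an auxiliary variable X_{ij}, can be sketched generically. This is an illustrative sketch with my own data representation (coefficient dictionaries), not the implementation inside CPLEX:

```python
def rlt_products(coeffs, rhs, n):
    """RLT sketch: multiply the constraint sum_j coeffs[j]*x_j <= rhs
    by each nonnegative variable x_i and linearize, replacing x_i*x_j
    by an auxiliary variable X[i, j].  Each returned dict encodes the
    valid inequality sum_j coeffs[j]*X[i, j] - rhs*x_i <= 0."""
    rows = []
    for i in range(n):
        row = {("X", i, j): coeffs[j] for j in range(n)}
        row[("x", i)] = -rhs            # from multiplying the rhs by x_i
        rows.append(row)
    return rows
```

For x1 + x2 ≤ 1 multiplied by x1 ≥ 0 this yields X11 + X12 − x1 ≤ 0, the kind of generated constraint whose product terms the projected variant then replaces by over- and underestimators.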


in rotordynamics
Frank Strauß ∗ Vincent Heuveline † Ben Schweizer ‡
Abstract
We consider a shape optimization problem in rotordynamics where the mass of a rotor is minimized subject to constraints on the natural frequencies. Our analysis is based on a class of rotors described by a Rayleigh beam model including effects of rotary inertia and gyroscopic moments. The solution of the equation of motion leads to a generalized eigenvalue problem. The governing operators are non-symmetric due to the gyroscopic terms. We prove the existence of solutions for the optimization problem by using the theory of compact operators. For the numerical treatment of the problem, a finite element discretization based on a variational formulation is considered. Applying results on the spectral approximation of linear operators, we prove that the solution of the discretized optimization problem converges towards the solution of the continuous problem as the discretization parameter tends to zero. Finally, a priori estimates for the convergence order of the eigenvalues are presented and illustrated by a numerical example.


Within the empirical study, our original results include – evidence that large and tightly constrained problems beyond the limitations of ILP branch-and-bound can be solved directly from[r]


angle γ ∈ [0, 90◦], both rotations are toward the radial direction. A second rotation is applied as follows: thrusters T1 and T2 are rotated by an angle β ∈ [0, 90◦] about the North-axis (towards the East), and T3 and T4 by an angle β ∈ [0, 90◦] about the South-axis (towards the West). One configuration (A) used in this paper is obtained for γ = 45◦ and β = 90◦, shown in Fig. 2. The arrows point in the direction of the acceleration that is exerted on the spacecraft by each thruster (which is opposite to the exhaust plume). With this choice for β and γ the thrust vector lies completely in the tangential-normal plane, which is a common choice for geostationary satellites. This configuration is similar to the one implemented on the Hispasat Advanced Generation 1 mission [13]. Another configuration (B) that is analyzed in this paper is obtained by choosing γ = 45◦ and β = 10◦. Fig. 3 shows the projected thrust force vectors in the TN, RT and RN planes. Note that for this configuration the thrusters all point away from the solar panels as well as away from the Earth-facing panel. We also use a reference configuration (REF) with four thrusters pointing respectively North, East, South and West (Fig. 1). Note that additionally the location of the thrusters could be optimized, for example to support a combined orbit control and attitude control or momentum management strategy. The configurations used in this work assume each thrust force vector to pass through the satellite's center of mass, and attitude dynamics are ignored.


One of the first projection-based, distributed gradient methods was published in [8]. However, the proposed method relies on doubly stochastic matrix updates, which restrict the communication to undirected graphs. The contributions in [4] and [15] rely on row-stochastic communication matrices for diffusing the projected gradient information. Finally, [14] employs the push-sum consensus combined with a projection that uses a convex proximal function. The major drawback of the mentioned projection-based methods, in relation to the specific structure of the DEDP under consideration, is the assumption that all constraints of the distributed optimization problem are known by every agent and are therefore global. This restricts the privacy of the agents with local constraints. Compared to the projection-free method in [13], they have the advantage that no penalization parameter sequence needs to be chosen.

Syncom II was the first satellite to arrive in a geosynchronous orbit. Since that time many satellites have followed, and in particular the geostationary orbit has become increasingly populated. Driven by the need to avoid radio-frequency interference between different satellites, the geostationary orbit was divided into slots, which are allocated by the International Telecommunication Union. The limited availability and difficulty of obtaining these slots, especially at key locations above highly populated areas, together with the ever increasing need for geostationary satellite services, led several organizations to collocate multiple satellites within a geostationary slot, see e.g. [1]. Collocation strategies used to control more than two satellites in one slot generally rely on a coordinated approach to specifying desired states. The satellites' desired mean eccentricity and inclination vectors are defined in a configuration that is passively safe. Each satellite in the fleet is then controlled individually to stay close to this desired state [2]. The idea is to maintain relative eccentricity and inclination vectors (anti-)parallel, to ensure that radial separation is maximum when normal separation vanishes.


The first special feature of the application is the presence of ties, and the solution concept of stable score-limits. According to the Hungarian admission policy, when two applicants have the same score at a programme, they should either both be accepted or both be rejected by that programme. The solution of stable score-limits ensures that no quota is violated; hence the last group of students with the same score that would cause a quota violation is always rejected. A set of stable score-limits always exists, and a student-optimal solution can be found efficiently by an extension of the Gale-Shapley algorithm, as shown in [9]. This method is the basis of the heuristic used in the Hungarian application. The second and third special features studied in this paper are the lower and common quotas. A university may set not just an upper quota for the number of admissible students for a programme, but also a lower quota. A violation of this lower quota would imply the cancellation of the programme. Furthermore, a common upper quota may also be introduced for a set of programmes, to limit the number of students admitted to a faculty, at a university, or nationwide with regard to the state-financed seats in a particular subject. These concepts were studied in [7], where the authors showed that each of these special features makes the college admission problem NP-hard, even in the form that is present in the Hungarian application. Finally, students can apply for pairs of programmes in the case of teacher education programmes. This possibility was reintroduced in the scheme in 2010. This problem is closely related to the Hospitals/Residents problem with Couples, where couples may apply for pairs of positions. The latter problem is also known to be NP-hard [24], even for unit-capacity hospitals, for so-called consistent preferences [21], and also for a specific setting present in Scotland [8] where hospitals have common rankings. The fact that the unit-capacity case is also NP-hard implies the NP-hardness of the paired application problem as well.
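The score-limit rule for a single programme, rejecting as a whole the last tied group that would overflow the quota, can be illustrated directly. A minimal sketch (the function name is my own; the actual Hungarian mechanism couples all programmes via the extended Gale-Shapley algorithm of [9]):

```python
def score_limit(scores, quota):
    """Smallest score-limit for a single programme such that admitting
    every applicant scoring at or above the limit respects the quota;
    a tied group that would overshoot the quota is rejected as a whole."""
    for limit in sorted(set(scores)):
        if sum(s >= limit for s in scores) <= quota:
            return limit
    return max(scores) + 1  # even the top tied group exceeds the quota
```

With scores [90, 85, 85, 80] and quota 2, the limit is 90: admitting either 85-applicant would force admitting both and exceed the quota, so the whole tied group is rejected.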


(8), with D = R^p_+:
For a more general approach to sufficient first-order optimality conditions see Giorgi, Jimenez and Novo (2008). We have to note that condition (ii) is not very useful, as it is the same both for local efficient minimum points and for local efficient maximum points. In order to obtain useful conditions one has to impose some kind of generalized convexity (or concavity, in the case of a maximum problem) on the objective function f (see, e.g., Cambini and Martein (1993, 1994)).


Compared to other firm panels like Compustat or AMADEUS, AFiD holds a number of major advantages for our analysis. Unlike public accounting data, the Investment Survey and the Monthly Report provide very detailed information on the volume and composition of payrolls, investments, and sales revenues, and these data are collected at the establishment level. Since we complement the data with information on local tax rates, we are not only able to analyze correlations between payroll expense and tax rates, but also correlations with number of employees, number of working hours per employee, sales revenue, gross investment, and measures for tax avoidance (payroll per number of hours worked, payroll per unit of sales revenue) on the establishment and firm levels. To our knowledge, this is a unique feature, allowing us a more detailed analysis than in previous research. Both surveys are conducted as a mandatory census for all domestic establishments in the manufacturing and mining industries with at least 20 employees; therefore, non-response and sample selection are not issues. An additional advantage stems from the fact that the data are anonymized and available only for political and scientific use. Hence, there should be a smaller incentive for survey participants to “brighten the numbers” as in balance sheet information.


The German practitioner literature discusses a range of tax avoidance strategies to manipulate payroll expense as an FA factor (Dietrich & Krakowiak, 2009; Scheffler, 2011). To obtain a better understanding of tax avoidance opportunities in the German FA system, we rely on a series of qualitative interviews with a focus on tax-planning and audit procedures. Overall, we use 19 interviews with tax advisers, business taxpayers, staff members of tax authorities and staff members of municipal authorities.⁷ Importantly, this qualitative research confirms anecdotal evidence of a weak tax audit system for FA purposes. Hence, the fiscal administration and the municipal authorities have only very limited opportunities to check the validity of the payroll distribution reported in a firm's partition statement. This is partially driven by the fact that firms' central offices administer employee contracts, while the establishment is not a legally distinct entity. As a result, there exist no official documents on the workplaces of employees and no official accounts on the distribution of payroll among establishments. Using the tax practitioner literature, as well as the findings of our own qualitative research, we were able to identify tax-planning strategies for FA factor manipulation. One example of such tax planning is that an establishment in a low-tax municipality leases employees to other establishments or subsidiaries of the same firm group that are located in high-tax municipalities. The other business units pay a leasing fee for the employees. However, this leasing fee is not considered a payroll expense under German FA rules. As a result, the FA-relevant payroll expense is paid in a low-tax municipality, while the hours worked are performed in a high-tax municipality. This increases (reduces) the weight of low-tax (high-tax) municipalities in the FA formula.


The third chapter deals with minors in the reconstruction conjecture and the edge-reconstruction conjecture. We pose the question whether containing a specific minor or not is reconstructible or edge-reconstructible for certain graphs. The classes of graphs we investigate are based upon their connectivity. We give a bound for 2-connected graphs and connected minors, and we show that the problem is edge-reconstructible for a range of other graph classes. The chapter closes with an application of the results. In particular, we were able to show that the Hadwiger number and the treewidth are edge-reconstructible for a wide range of cases. We also give a bound for the edge-reconstruction of the Hadwiger number and the treewidth based upon the ratio between the order of the graph and the order of the minor. Furthermore, these results can easily be applied to various excluded minor theorems (a graph has a specific property if and only if it has no minors isomorphic to a specific set of graphs).


The negative and significant results for the Payroll per hour ratio and the Payroll per revenue ratio document that the impact of TaxD on the FA factor payroll expense is stronger than FA impacts on input and output measures. This constitutes evidence for the existence of more or less artificial tax avoidance strategies that do not result from a reallocation of real labor input (see also Section 2). Our regression results imply that tax avoidance may be an important element of the overall FA effect on payroll. Comparing the range of coefficients of tax avoidance measures (-0.545 to -0.929) with the effect on working hours (-0.633), tax avoidance may be responsible for a significant part of the overall impact of TaxD on Payroll share.¹³ Considering that our tax avoidance measures are probably not appropriate to identify all tax avoidance strategies, tax avoidance may even be responsible for a major part of the FA impact. However, as it is very hard to measure tax avoidance under an FA system directly, and consequently our proxies are indirect measures for tax avoidance, the results have to be interpreted with due caution. An alternative explanation for a tax-driven reduction of payroll expense without a corresponding change in the underlying real input measure (number of working hours) or output measure (sales revenues) might be tax incidence. It has been argued that businesses may impose the local tax burden on their employees by reducing gross wages (Fuest et al., 2013). However, taking into account the binding force of labor market contracts and the tariff commitment of most German industries (especially in the manufacturing sector), this does not seem to be a likely explanation for the strong immediate tax effects on the Payroll per hour ratio found in our paper. While employment and incidence effects of taxes should have delayed effects (Fuest et al., 2013; Siegloch, 2014), our regression results rather imply a rapid effect from the tax rate differential on Payroll share, the Payroll per hour ratio and the Payroll per


2006] are amongst the earliest and most frequently used techniques. A standard approach for the two-material case consists in setting u(x) = u_1 w(x) + u_2 (1 − w(x)) and minimizing over the set of all characteristic functions w(x) ∈ {0, 1}. This problem is non-convex, but its convex relaxation, minimizing over all w(x) ∈ [0, 1], often has a bang-bang solution, i.e., w(x) ∈ {0, 1} almost everywhere. For multi-material optimization, this approach can be extended by introducing multiple characteristic functions; non-overlapping materials can be enforced by considering the third domain as an intersection of two (possibly overlapping) domains, e.g., u(x) = u_1 w_1(x) + u_2 (1 − w_1(x)) w_2(x) + u_3 (1 − w_1(x))(1 − w_2(x)) for w_1(x), w_2(x) ∈ [0, 1]. For an increasing number d of materials, this approach has obvious drawbacks due to its combinatorial nature and increasing non-linearity. Shape calculus techniques [Pironneau 1984; Sokołowski and Zolésio 1992] focus on the effect of smooth perturbations of the interfaces on the cost functional and have reached a high level of sophistication. From the point of view of numerical optimization, they are first-order methods and stable, with the drawback that they mostly allow only smooth variations of the reference geometry. When combined with level-set techniques [Allaire, Jouve, and Toader 2004; Ito, Kunisch, and Li 2001], they are flexible enough to allow vanishing and merging of connected components, but they do not allow the creation of holes. This is allowed in the context of topological sensitivity analysis [Garreau, Guillaume, and Masmoudi 2001; Sokołowski and Żochowski 1999], which investigates the effect of the creation of holes on the cost. Let us point out that in our work we do not rely in any explicit manner on knowledge of the shape or the topological derivatives.
Moreover, the numerical technique that we propose is of second order rather than of gradient nature. Second-order shape or topological derivative analysis is available, but it is involved when it comes to numerical realization. Multi-material optimization for elasticity problems is further investigated in [Haslinger et al. 2010] by means of H-convergence methods, and by phase-field methods in [Blank et al. 2014]. The work which in part is most closely related to ours is [Amstutz 2011], see also [Amstutz and Andrä 2006; Amstutz 2010], where for the case of linear solution operators and two materials, the set of coefficients is expressed in terms of characteristic functions, and the resulting problem is considered in function spaces rather than in terms of subdomains and their boundaries. The first-order optimality condition is derived and formulated as a nonlinear equation for which a semi-smooth Newton method is applicable.
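The nested characteristic-function parameterization for three materials quoted above translates directly into code. A small sketch using the formula from the text (the function name is my own):

```python
def three_material(u, w1, w2):
    """Material interpolation via nested characteristic functions:
    u1*w1 + u2*(1 - w1)*w2 + u3*(1 - w1)*(1 - w2), with w1, w2 in [0, 1];
    at the vertices of the (w1, w2) unit square exactly one material
    coefficient is selected."""
    u1, u2, u3 = u
    return u1 * w1 + u2 * (1 - w1) * w2 + u3 * (1 - w1) * (1 - w2)
```

Evaluating at (w1, w2) = (1, anything), (0, 1) and (0, 0) recovers u1, u2 and u3 respectively, which is exactly why a bang-bang solution of the relaxed problem selects a single material almost everywhere.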


In case 3, the vapor boil-up rate VB, the reflux ratio set-point RR and the purity of the fresh feeds (i.e., the presence of the other reactant in each fresh feed) are four uncertain disturbances acting on the column. The magnitude of these step disturbances follows the joint normal PDF with mean and covariance according to Table 3.3. The GBD-based sequential solution approach demands 75 minutes of computation time using the sigma point method, since the complexity of the resulting MIDO problem increases with the number of uncertain disturbances. The optimal control structure and controller parameters are summarized in Table 3.5; they differ from those of the other two cases as well as from the heuristic method and the nominal case. The expectation and the variance of the performance index at the optimal decentralized control system are compared with the specialized cubature formula [57] and verified with random samples, as given in Table 3.9. Here, we use the specialized cubature formula for comparison purposes instead of the specialized product Gauss formula, because it is the suitable choice of numerical integration method for 3 ≤ n ≤ 7 [57]. However, it requires 24 grid points, which is more than the sigma point method. Further, the sigma point method estimates the expectation within 0.8% error of the random samples with 10000 observations, as shown in Fig. 3.7. As in the previous cases, the optimal decentralized control system from the stochastic approach has superior performance compared to the heuristic method and the nominal case, as shown in Table 3.10 through the statistical objective function value. Further, we observed in this case also that not much improvement was found for a multivariable controller compared to the decentralized control system.
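For readers unfamiliar with the sigma point method referenced above, its core is a small deterministic point set matching the mean and covariance of the disturbance distribution; propagating the points through the plant model yields cheap estimates of the expectation and variance of the performance index. A generic sketch of symmetric (Julier-style) sigma points; the exact parameterization used in the thesis may differ:

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Symmetric sigma points: 2n+1 points whose weighted sample mean
    and covariance reproduce (mean, cov); pushing them through a
    nonlinearity estimates the transformed expectation and variance."""
    mean = np.asarray(mean, dtype=float)
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * np.asarray(cov, dtype=float))
    pts = [mean] + [mean + s * col for col in S.T for s in (1.0, -1.0)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)          # weight of the central point
    return np.array(pts), w
```

With n = 4 disturbances this needs only 9 model evaluations, which is why the text notes that the 24-point cubature formula is the more expensive comparison method.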
