
Thread: Program Components - The subjective part of scoring a program

  1. #1
    Iulya

    Program Components - The subjective part of scoring a program

    There are five categories that make up the "artistic" score:
    1. Skating Skills
    2. Transitions
    3. Performance/Execution
    4. Choreography
    5. Interpretation



    Skating Skills


    Definition: Overall skating quality – edge control and flow over the ice surface demonstrated by a command of the skating vocabulary (edges, steps, turns, etc.), the clarity of technique, and the use of effortless power to accelerate and vary speed.

    Criteria:

    • Balance, rhythmic knee action, and precision of foot placement
    • Flow and effortless glide – rhythm, strength, clean strokes, and an efficient use of lean create a steady run to the blade and an ease of transfer of weight, resulting in seemingly effortless power and acceleration.
    • Cleanness and sureness of deep edges, steps, and turns – the skater should demonstrate clean and controlled curves, deep edges, and steps.
    • Varied use of power/energy, speed, and acceleration – variety in the gradation, some of which may be subtle.
    • Multi-directional skating – includes all directions of skating: forward and backward, clockwise and counterclockwise, including rotation in both directions.
    • Mastery of one-foot skating – no overuse of skating on two feet.

    Pair Skating and Ice Dancing: Equal mastery of technique by both partners shown in unison.
    Ice Dancing: Compulsory Dance – Ice Coverage



    Transitions/Linking Footwork & Movement


    Definition:
    The varied and/or intricate footwork, positions, movements, and holds that link all elements. In singles, pairs, and synchronized skating this also includes the entrances and exits of technical elements.
    Criteria:
    • Variety
    • Difficulty
    • Intricacy
    • Quality (including unison in Pair Skating and Ice Dancing)
    • Balance of workload between partners (Pair Skating and Ice Dancing)
    • Variety of Dance holds (not excessive side by side and hand in hand – Ice Dancing)
    Transitions can be short or long, including the use of blade, body, head, arms, legs as dictated by the music. (Minimum use of cross-cuts)

    Performance/Execution


    Definition:
    Performance is the involvement of the skater/couple/team physically, emotionally, and intellectually as they translate the intent of the music and choreography.
    Execution is the quality of movement and precision in delivery. This includes harmony of movement in Pair Skating and Ice Dancing.

    Criteria:

    • Physical, emotional, and intellectual involvement – In all skating disciplines each skater must be physically committed, sincere in emotion, and equal in comprehension of the music and in execution of all movement.
    • Carriage – Carriage is a trained inner strength of the body that makes possible ease of movement from the center of the body. Alignment is the fluid change from one movement to the next.
    • Style and individuality/personality – Style is the distinctive use of line and movement as inspired by the music. Individuality/personality is a combination of personal and artistic preferences that a skater/pair/couple brings to the concept, manner, and content of the program.
    • Clarity of movement – Clarity is characterized by the refined lines of the body and limbs, as well as the precise execution of any movement.
    • Variety and contrast – Varied use of tempo, rhythm, force, size, level, movement shapes, angles, and body parts, as well as the use of contrast.
    • Projection – The skater radiates energy, resulting in an invisible connection with the audience.
    • Unison and “oneness” (Pair Skating and Ice Dancing) – Each skater contributes equally toward achieving all six of the performance criteria.
    • Balance in performance (Pair Skating and Ice Dancing)
    • Spatial awareness between partners – management of the distance between partners and management of changes of hold (Pair Skating and Ice Dancing)
    • The use of the same techniques in edges, jumping, spinning, line, and style is necessary for visual unison; both skaters must move alike in stroke and in the movement of all limbs and head, with an equal workload in speed and power. (Pair Skating)

    Choreography / Composition

    Definition:
    An intentional, developed, and/or original arrangement of all movements according to the principles of proportion, unity, space, pattern, structure, and phrasing.

    Criteria:


    • Purpose (idea, concept, vision, mood) – To reward the intentional and quality design of a program.
    • Proportion (equal weight of all parts) – Each part and section has equal weight in achieving the aesthetic pursuit of the composition.
    • Unity (purposeful threading of all movements) – A program achieves unity when every step, movement, and element is motivated by the music, when all its parts, big or small, seem necessary to the whole, and when there is an underlying vision or symbolic meaning that threads together the entire composition.
    • Utilization of Personal and Public Space – Movement phrases are distributed in such a way that they communicate from every angle in a 360 degree skater-viewer relationship.
    • Pattern and Ice Coverage – Movement phrases are designed using an interesting and meaningful variety of patterns and directions of travel.
    • Phrasing and Form (movement and parts are structured to match the phrasing of the music) – A phrase is a unit of movement marked by an impulse of energy that grows, builds, finds a conclusion, and then flows easily and naturally into the next movement phrase. Form is the presentation of an idea, the development of the idea, and its conclusion, presented in a specific number of parts and a specific order for design.
    • Originality of Purpose, Movement, and Design – Originality involves an individual perspective of movement and design in pursuit of a creative composition as inspired by the music and the underlying vision.
    • Shared Responsibility of Purpose (Pair Skating, Ice Dancing, and Synchronized) – Each skater has an equal role in achieving the aesthetic pursuit of the composition, with equal steps, movements, and a sense of purpose in unifying the composition.

    Interpretation

    Definition: The personal and creative translation of the music to movement on ice. To reward the skater who, through movement, creates a personal and creative translation of the music. As the tempo binds all notes in time, the ability to use the tempos and rhythms of the music in a variety of ways, along with the subtle use of finesse to reflect the nuances of all the fundamentals of music (melody, rhythm, harmony, color, texture, and form), creates a mastery of interpretation.

    Criteria:

    • Effortless Movements in Time to the Music (Timing) – Note: Timing is a separate component in Compulsory Dances. The ability to translate music through sureness of rhythm, tempo, effective movement, and effortless flow over the ice surface, through rhythmic continuity and awareness of all tempo/rhythm changes in a variety of ways.
    • Expression of the music’s style, character, and rhythm – Maintaining the character and style of the music throughout the entire program by use of body and skating techniques to depict a mood, style, shape, or thematic idea as motivated by the structure of the music: melody, harmony, rhythm, color, texture, and form. The total involvement of the body and being should express the intent of the music.
    • Use of finesse to reflect the nuances of the music – Finesse is the skater’s refined, artful manipulation of nuances. Nuances are the personal, artistic ways of bringing subtle variations to the intensity, tempo, and dynamics of the music made by the composer and/or the musician.
    • Relationship between the partners reflecting the character of the music. Interpretive unison is an equal partnership with the same degree of sensitivity between partners not only to the music, but also to the equal understanding of the music’s nuances. There is an intimacy between the partners that is characterized by a feeling of “surrender” to the music and possibly to each other that creates an entity greater than the two of them.
    • Appropriateness of music (original dance and free dance)


  3. #3
    Iulya
    Absolute Marking of Program Component Scores
    How absolute does it have to be?

    The IJS calculation method is based on having the skaters marked on an absolute point scale for element scores and Program Component scores. When evaluated during training, judges are expected to give marks that agree with the official panel. In ISU competitions, judges are expected to mark within a defined range (corridor) and the correctness of their judging is evaluated based on being within the corridor. Judging panels are frequently criticized when event protocols show Program Component marks spread over a large range for a given component.

    IJS scores are calculated to the nearest 0.01 points. In a perfect system, the calculated scores would have to be correct to better than 0.005 points for every skater so that every place is correctly determined. It is well established that IJS does not yet meet that mathematical standard for accuracy, precision, and hence fairness. The calculation method has inherent flaws that result in rounding errors that are typically several hundredths of a point and can potentially be as large as many tenths of a point. The spread of marks among the judges due to random differences of opinion is even larger, with a standard deviation typically ¾ of a point in any one event segment. Thus, both rounding errors and random errors are already significantly greater than 0.01 point, demonstrating that the IJS calculation method is currently far from a perfect system.

    In addition to round-off error and random judges’ error, the IJS calculation method is also affected by variations in the judges’ individual marking standards, errors by the Technical Panel, and errors in the Scale of Values (SoV).

    Examination of event protocols shows that judges clearly do not all mark to the same absolute standard. Each judge has an individual marking standard that departs to some extent from the ideal standard (which itself is not well defined). The issue we examine here is the effect of these inconsistent individual marking standards as a source of error in the calculation of IJS results. Errors by the Technical Panel and errors inherent in the SoV will be discussed in future papers.

    Just how consistent do the judges’ individual marking standards really have to be?

    We begin with marks subject only to random errors and then expand that to marks with both random and systematic errors. Those bored by the math can jump from here to the Summary section at the end.

    Let:

    mij be the mark judge "i" gives skater "j"

    Mj be the correct mark skater j deserves as specified by the rules, if judges had perfect knowledge of the rules, perfect observational skills, and perfect judgment; i.e. the "truth" mark.

    Dij be the difference between the mark judge i gives skater j and the truth mark for skater "j".

    Then:

    Dij = mij - Mj

    or,

    mij = Mj + Dij (1)

    Equation (1) assumes that the judges are all marking on the same numerical scale to the same standard, and thus the only errors in the marks are differences of opinion from one judge to the next because the judges do not have perfect knowledge of the rules or skating, perfect observational skills, or perfect judgment.

    In the calculation of scores, the marks for the judges are averaged for each skater j. We use "< >" brackets to represent an average over the judges for a given skater.

    In averaging the marks the judges actually give, equation (1) becomes

    <mj> = Mj + <Dj> (2)

    That is, the average mark for skater j is the correct mark plus the average of all the errors for that skater. If the errors in the marks (Dij) are random, the average of the errors will decrease as the number of judges increases, in inverse proportion to the square root of the number of judges. As the number of judges becomes infinite, the average error goes to zero and the average of the actual marks approaches the truth mark.
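    To get a feel for how quickly the random term <Dj> shrinks, here is a minimal Python sketch (my own illustration, not from the article), assuming normally distributed per-judge errors with a standard deviation of 0.75 points, roughly the event-segment figure quoted above:

    [CODE]
# Monte Carlo sketch: spread of the averaged random error <Dj> vs. panel size.
# Assumption (for illustration only): each judge's error Dij is drawn from a
# normal distribution with standard deviation 0.75 points.
import random
import statistics

SIGMA = 0.75      # assumed per-judge random error (points)
TRIALS = 5000     # simulated panels per panel size

for n_judges in (5, 9, 100, 1000):
    mean_errors = [
        statistics.fmean(random.gauss(0.0, SIGMA) for _ in range(n_judges))
        for _ in range(TRIALS)
    ]
    # The spread of the averaged error scales as SIGMA / sqrt(n_judges).
    print(n_judges, round(statistics.stdev(mean_errors), 4))
    [/CODE]

    With 5-9 judges the averaged error is still a few tenths of a point; it only gets down near 0.01 points for panels in the thousands.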

    In comparing the marks for two skaters (j and j+1) the difference in calculated score will be:

    <mj> - <mj+1> = (Mj - Mj+1) + (<Dj> - <Dj+1>) (3)

    Whether the two skaters finish in the correct order depends on the differential error for the two skaters, (<Dj> - <Dj+1>).

    Since results are determined to the nearest 0.01 points, the judges must be adequately trained and enough judges must be used so that the differential error term is less than 0.01 point. For well trained judges, reducing the differential error term to that level requires tens of thousands of judges, or more.

    Because only 5-9 marks are typically used in calculating IJS scores, and not tens of thousands, the differential error term is typically much greater than 0.01, and errors in the order of finish are common in IJS calculations. Analysis of IJS results shows that typically one-third to one-half of the results in IJS competitions are not statistically significant (i.e., the statistical uncertainty in the marks is greater than the point difference between sequential places), with the random error typically about ¾ of a point in an event segment.
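    As a sanity check on the "tens of thousands" figure (again assuming independent per-judge errors with a standard deviation of 0.75 points), the required panel size follows from the standard error of the difference of two panel averages:

    [CODE]
# Panel size needed for the differential error <Dj> - <Dj+1> to have a
# standard deviation below 0.01 points, assuming sigma = 0.75 points per judge.
import math

sigma = 0.75     # assumed per-judge random error (points)
target = 0.01    # scores are resolved to 0.01 points

# std of the difference of two independent panel averages is
# sqrt(2) * sigma / sqrt(N); require this to be below the target.
n_required = 2 * (sigma / target) ** 2
print(math.ceil(n_required))   # 11250 judges
    [/CODE]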

    In addition to random errors, judges are not equally calibrated (trained), and do not all mark using the same numerical standard. For example, one judge may mark all skaters in a group systematically higher than another judge, or one judge may mark a group using a greater or lesser span of marks than another judge. These systematic differences in calibration also introduce errors into IJS results.

    To examine the effect of systematic errors due to differences in marking standards we model the calibration of the judges as follows, with equation (1) becoming:

    mij = Ai + Bi*Mj + Ci*Mj^2 + Dij (4)

    where,

    Ai is the "offset" for judge i’s personal marking scale (judge i marks all skaters in the group higher or lower than the correct absolute standard)

    Bi is the "gain" for judge i’s personal marking scale (judge i uses a spread of marks greater or less than the correct absolute standard)

    Ci is one of several potential non-linear terms; it accounts for judge i using a span of marks that is different for the lower scoring skaters compared to the higher scoring skaters. In principle, there could be terms of higher order in Mj, but these are omitted to avoid cluttering up the discussion.

    [What we are doing is taking the individual marking standards of the judges and modeling them as a polynomial expansion, and keeping only the first three terms for the purpose of this analysis.]

    If a judge is correctly calibrated and marking on the correct absolute marking scale, then for that judge Ai = 0.0, Bi = 1.0, and Ci = 0.0.
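    To make the calibration model concrete, here is a small sketch of equation (4) with invented coefficients for a hypothetical five-judge panel (the numbers are mine, purely for illustration):

    [CODE]
# Sketch of the calibration model  mij = Ai + Bi*Mj + Ci*Mj^2 + Dij
# for a hypothetical five-judge panel (coefficients invented for illustration).
import random

truth = [8.50, 8.00, 7.50, 7.00, 6.50]   # "truth" marks Mj for five skaters

# (Ai, Bi, Ci) per judge: offset, gain, and a small non-linear term.
judges = [
    ( 0.00, 1.00, 0.000),   # perfectly calibrated
    (-2.00, 1.00, 0.000),   # marks everyone 2.00 points low (offset only)
    ( 0.00, 1.10, 0.000),   # uses a 10% wider span of marks (gain)
    ( 0.50, 0.95, 0.000),   # slightly high and slightly compressed span
    ( 0.00, 1.00, 0.010),   # small quadratic (non-linear) term
]

def mark(a, b, c, m, sigma=0.25):
    """One judge's mark for truth value m, including a random error Dij."""
    return a + b * m + c * m * m + random.gauss(0.0, sigma)

for skater, m in enumerate(truth, start=1):
    marks = [mark(a, b, c, m) for (a, b, c) in judges]
    print(f"skater {skater}: truth {m:.2f}, panel average {sum(marks)/len(marks):.2f}")
    [/CODE]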

    If we average equation (4) over all marks for skater j, we obtain:

    <mj> = <A> + <B>*Mj + <C>*Mj^2 + <Dj> (5)

    where,

    <A> is the average A coefficient for the panel

    <B> is the average B coefficient for the panel

    <C> is the average C coefficient for the panel.

    Comparing the marks for two skaters (j and j+1) the difference in calculated score will be the difference between the following two equations:

    <mj> = <A> + <B>*Mj + <C>*Mj^2 + <Dj>

    <mj+1> = <A> + <B>*Mj+1 + <C>*(Mj+1)^2 + <Dj+1>

    which gives,

    <mj> - <mj+1> = <B>*(Mj - Mj+1) + <C>*(Mj^2 - (Mj+1)^2) + (<Dj> - <Dj+1>) (6)

    If <B> = 1.0, and <C> = 0.0, then equation (6) reduces to equation (3).

    Note that the A coefficients do not affect the point differences between any two skaters. Judges may mark on individual numerical scales with completely different offsets and the results are not affected.

    Thus, while it is cosmetically appealing to try to calibrate every judge to give the same mark for a given skater, the reality is that differences in offset are meaningless. For example, if most of a panel marks a group of skaters from 3.50 through 6.75 and one judge marks the same group from 1.50 through 4.75, the fact that the judge marked 2.00 points lower is in itself no measure of poor judging and has no impact on the results.
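    A quick numerical way to see the offsets cancel (hypothetical marks, using the 2.00-points-low judge from the example above):

    [CODE]
# Offsets cancel in point differences: one judge marking 2.00 points low
# shifts both skaters' panel averages equally, so the gap is unchanged.
def panel_average(marks):
    return sum(marks) / len(marks)

skater_a = [6.75, 6.50, 6.75, 7.00, 4.75]   # fifth judge is 2.00 points low
skater_b = [6.25, 6.00, 6.25, 6.50, 4.25]   # same judge, same 2.00 offset

print(panel_average(skater_a) - panel_average(skater_b))   # 0.50 either way
    [/CODE]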

    The B and C coefficients, however, are another matter.

    For the B coefficients (the linear term), a comparison of equations (3) and (6) shows that when the judges use different gain factors (spread of marks from highest to lowest scoring), the following error is introduced in the point difference between two skaters:

    LinearError = (<B> - 1) * (Mj - Mj+1) (7)

    Since results are determined to the nearest 0.01 points, we can place limits on how close to 1.00 <B> must be.

    For example, in an event where the typical difference between two sequential places is 5.0 points in PCS, if we require,

    | LinearError | < 0.01

    then we must have,

    0.998 < <B> < 1.002

    That means, on the average, the span of the marks used by the judges must agree to within 2/10 of one percent (2 parts in 1000). For example, if the judges use a span of 5.0 points in each Program Component for a given event, the spans used by each judge must agree on the average within 0.01 points (0.002 * 5.0). In other words, they must agree nearly perfectly.
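    The 0.002 bound is just equation (7) rearranged; a short check with the assumed 5.0-point gap between sequential places:

    [CODE]
# Linear error bound from equation (7): |(<B> - 1) * (Mj - Mj+1)| < 0.01
gap = 5.0                        # assumed PCS gap between sequential places (points)
tolerance = 0.01                 # scores are resolved to 0.01 points
print(tolerance / gap)           # 0.002, i.e. 0.998 < <B> < 1.002
    [/CODE]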

    We conclude, then, that so far as training and evaluating the judges is concerned, what matters is not whether the judges mark on the same scale overall (same average score for the group), but whether the judges use the same linear span of marks from highest to lowest scoring skater.

    To illustrate this, suppose that on a five judge panel, four judges are perfectly calibrated and use a span of 5 points in scoring each Program Component. The fifth judge, however, uses a span of 5.50 points. This 10% greater span by one judge results in an average B coefficient of 1.02 and introduces a point error into the results that will result in place switches for a small but non-zero number of skaters.
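    Plugging that five-judge example back into equation (7), again with a hypothetical 5.0-point gap between sequential places:

    [CODE]
# Five-judge example: four judges use a 5.00-point span, one uses 5.50.
spans = [5.00, 5.00, 5.00, 5.00, 5.50]
gains = [s / 5.00 for s in spans]       # Bi relative to the correct 5.00 span
avg_gain = sum(gains) / len(gains)      # <B> = 1.02
print(avg_gain)

# Error in the point difference between two skaters 5.0 points apart:
print((avg_gain - 1.0) * 5.0)           # 0.10 points, ten times the 0.01 target
    [/CODE]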

    Using the same approach for the C coefficient, the quadratic error is given by,

    QuadraticError = <C>*(Mj^2 - (Mj+1)^2) (8)

    and for the absolute value of this error to be less than 0.01 points, for typical Senior Men competition results this requires,

    | <C> | < 0.00025

    This requirement says that, on the average, if the individual marking standard of each judge departs even slightly from a linear relation, an error will be introduced that can result in some place switches. Over the full range of allowed Program Component marks, for example, a C coefficient as high as 0.01 would hardly be noticeable in a cursory inspection of the marks, yet it has the potential to introduce an error in the marks as large as 1.00 point in an event with a short and long program.
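    For a rough feel for the size of the quadratic term, here is equation (8) evaluated for a few invented pairs of component marks with C = 0.01 (my numbers, not the article's Senior Men case):

    [CODE]
# Quadratic error from equation (8): <C> * (Mj^2 - Mj+1^2), illustrative numbers.
c = 0.01                                 # a non-linearity hard to spot by eye
for mj, mj1 in [(9.0, 8.5), (8.0, 7.0), (9.0, 6.0)]:
    print(mj, mj1, round(c * (mj**2 - mj1**2), 3))
# Per component mark the error is a few hundredths to a few tenths of a point;
# summed over components, factored, and across two segments it can approach
# the 1.00-point figure quoted above.
    [/CODE]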

    Since the errors introduced by individual marking standards among the judges depend on the average of the B and C coefficients, one way to reduce these errors is again to use a large number of judges to average out the different marking standards, assuming high and low individual marking standards are equally common. Unfortunately, as with random errors, thousands of judges are required to reduce the average B and C coefficients to values where the point error is less than 0.01 point. On the other hand, if all judges are trained systematically high or low, then increasing the number of judges does nothing to remove this source of error.
    Summary

    Differences in individual marking standards among the judges are a source of error in the calculation of IJS results. This source of error is sufficiently large (tenths of a point to a full point) to result in placement errors in competition results.

    Offset errors (differences in average score for a group) do not introduce errors in placement. A judge may mark an entire group outside the corridor and introduce NO error into the final results. Evaluating the judges based on a simple corridor alone is not an insightful process.

    Gain errors (differences in span of marks from highest to lowest skater) can introduce significant errors into scores and lead to errors in placement in the final results. The span of marks for each judge needs to agree to better than 0.2% for this source of error to be less than 0.01 point. In evaluating judges the span of marks the judges use is of far greater importance than the corridor in which those marks lie.

    Non-linear errors (lack of linearity over the span of marks, such as different marking scales used for lower scoring skaters vs. higher scoring skaters, or drift in judgment creeping higher or lower during an event) can introduce significant errors into scores and lead to errors in placement in the final results. Even minute departures from a linear marking standard can introduce place altering errors. In evaluating judges, consistency (linearity) of marking standard from low marks through high marks is of far greater importance than the corridor in which those marks lie.
    Conclusions

    Use of the corridor to train and evaluate judges is a simple-minded approach that does not serve the judging community well in producing, evaluating, or maintaining quality judges. A more sophisticated process needs to be put in place if the quality of judging is to be improved.

    Consistency of judgment throughout an entire event (correct span of marks, linearity, and lack of marking drift) is essential to prevent placement errors due to systematic errors. To achieve this, IJS requires both absolute and relative judgment. Forbidding a direct comparison of the performances in an event is a serious flaw in the IJS judging process that makes IJS scoring less accurate and less consistent, rather than otherwise.

    Training and evaluation of judges should focus less on absolute agreement of Program Component values and more on the span of marks used and consistency of judgment throughout an event. However, the goal of obtaining a pool of judges who all mark to the same standard is likely never to be reached so long as the absolute standard for each Program Component criterion remains as vague and obscure as it currently is. So long as judges lack specific guidance for what marks should be given in specific situations, marking to a consistent absolute standard will remain an elusive goal.

    Given the current limitations that prevent IJS from determining the value of a program to the nearest 0.01 point, choosing winners to that level of precision is an arbitrary and capricious process. Given current technical limits, either scores need to be rounded off to the nearest whole point, or changes must be made to IJS to more cleanly separate the scores for skaters with very similar degrees of skill.

    http://www.iceskatingintnl.com/curre...%20Marking.htm
