I go to school at a university in the US. We use a competitive grading system where instructors assign grades based on how you perform relative to other students. This type of grading system doesn't encourage students to work together or help each other, because we are all competing against one another. I'm wondering: is there a non-competitive grading system that major universities are using to eliminate this competition?
What you're calling a competitive grading system is known pedagogically as norm-referenced assessment. There are a number of ways to implement it, but all are based on the principle that a student's performance is judged in reference to other students.
The opposite is known as criterion-referenced assessment. Its name is quite appropriate — you set up criteria. If the student meets the criteria, they get the grade.
Rubrics are one way to do this that gives off a strong air of fairness (although they sometimes might not jibe with what holistically makes sense…see clustro's answer for one way around that).
But most instructors I know, when designing methods of assessment, think about three things (maybe not consciously, though):
- What performance would indicate that a student is competent in the material I have taught, such that they can adequately take follow-up courses and succeed there? (grade: C)
- What performance would indicate that a student has mastered the expectations for the course? (grade: B)
- What performance would indicate that a student has gone beyond mastery of the basic expectations for the course? (grade: A)
For example, let's say we have an exam on the preterite in Spanish (where there are both regular and irregular verbs). A C-level student ought to get most, but not quite all, regular verbs, and have mixed performance on irregulars. A B-level student ought to get all the regular verbs, while having mixed performance on the irregulars. An A-level student would excel in all, consistently.
On the other hand, a D-level student might demonstrate cursory knowledge of verb formation, but not be able to apply it in any remotely consistent manner. And an F-level student would be clueless.
What criteria you use will depend greatly on the topic, and there are many ways to design exams. For the aforementioned verb test, you might give 60-70% regular verbs and 30-40% irregular. Based on the criteria given, you could expect grades to fall into the appropriate letter category (based on a 10pt scale).
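To make the arithmetic concrete, here's a small sketch of how those criteria line up with a 10pt scale. The exam mix (65% regular verbs) and the per-category accuracy rates are illustrative numbers I've chosen within the ranges above, not anything prescribed:

```python
# Hypothetical criterion-referenced scoring for the verb exam.
# Exam mix: 65% regular verbs, 35% irregular (within the 60-70 / 30-40
# split suggested above); the accuracy rates per profile are illustrative.

def expected_score(regular_rate, irregular_rate, regular_weight=0.65):
    """Expected percentage score given accuracy on each verb category."""
    irregular_weight = 1.0 - regular_weight
    return 100 * (regular_rate * regular_weight + irregular_rate * irregular_weight)

def letter_grade(score):
    """Standard 10-point scale: 90+ A, 80+ B, 70+ C, 60+ D, else F."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

# The student profiles described above, as rough accuracy rates:
profiles = {
    "A-level (excels in all)":             (1.00, 0.95),
    "B-level (all regular, mixed irreg.)": (1.00, 0.55),
    "C-level (most regular, mixed irreg.)": (0.85, 0.50),
}
for label, (reg, irreg) in profiles.items():
    s = expected_score(reg, irreg)
    print(f"{label}: {s:.0f}% -> {letter_grade(s)}")
```

With these (made-up) rates, the A-level profile lands around 98%, the B-level around 84%, and the C-level around 73% — that is, each falls naturally into its intended letter band without any curving.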
For a longer mathematics problem, you might distribute points in a rubric over questions like: Does the student know how to set up the problem? Do they know how to solve it? Were the calculations accurate? Obviously a student who understands how to set it up and solve it, but makes a few calculation errors, has met the criteria for passing — but they haven't demonstrated the perfection needed to get the highest level of achievement.
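A minimal sketch of such a rubric, with a hypothetical 3/4/3 point split across setup, method, and calculation (the split itself is my assumption, not anything standard):

```python
# Hypothetical 10-point rubric for a longer math problem.
# Criterion maxima are illustrative: setup 3, solution method 4, calculation 3.
RUBRIC = {"setup": 3, "method": 4, "calculation": 3}

def score(earned):
    """Total the earned points, capping each criterion at its maximum."""
    return sum(min(earned.get(k, 0), cap) for k, cap in RUBRIC.items())

# A student who sets up and solves correctly but makes calculation errors:
total = score({"setup": 3, "method": 4, "calculation": 1})
print(f"{total} / {sum(RUBRIC.values())}")  # 8 / 10 -- passing, not top marks
```

The point of weighting setup and method more heavily than calculation is exactly the tradeoff described above: a student who understands the problem but slips on arithmetic still clears the bar for passing, without reaching the top band.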
A good instructor will constantly reevaluate the criteria (to see that they meet the needs of the course) and the assessment (to see that student performance numerically lines up with observed performance) and make adjustments as necessary.