exercism / v3

The work-in-progress project for developing v3 tracks

Home Page: https://v3.exercism.io

The wider plan for v3. Comments requested!

iHiD opened this issue · comments

Below you will find a general update on some of the changes we're making as part of Exercism's evolution in v3. It has been written to provide a starting point for feedback and discussion amongst our product team, maintainers, and power-users who have used Exercism for a while. Once we have received and worked through feedback and resulting discussions, we will turn this into a more formal document that will form a "spec" for v3. We will then start more detailed product-work on each area.

The more feedback the better at this stage. If you feel like I've missed something that I've previously said will be addressed in v3, please raise that so I can add clarity. If there is something that is unclear or feels wrong, please tell us. If you feel like something doesn't fit together, we want to know now.

For every piece of feedback, or any discussions you would like to start, please open a separate issue in this repo using this issue template. This will avoid us having huge multi-stranded conversations on one issue. This issue itself is locked. Please note, I am looking for feedback on the content, not spelling/grammar etc of this document :)

Thanks in advance!

(cc @exercism/track-maintainers @exercism/alumni @exercism/website-copy @exercism/staff)


Plan for v3

Introduction

Exercism v3 is the third major iteration of Exercism. In it we aim to address some of the problems we have faced in v2, specifically in two areas:

  1. The limitations imposed by the ratio of mentors to students, and the resulting impact on the timeliness of feedback
  2. The difficulties caused by not having purpose-built, well-ordered exercises to teach a language

This document aims to be a relatively comprehensive list of the product changes we will be making to Exercism as part of v3. Each area will require product work and there are questions outstanding for all of them, but at a high level everything should logically fit together into a framework that will allow Exercism to grow indefinitely.

High-level overview

There are some key concepts that permeate all of the changes to v3.

Distinguishing between learning and practicing

The fundamental premise of v3 is that learning and/or mastering a language involves:

  • Learning the language's ideas, concepts and idioms
  • Practicing and exploring by using those concepts together
  • Learning from the wisdom/knowledge of more experienced users of that language.

In v3 we are treating all three of these areas as distinct. Each will form part of progressing through a track, and each will have a distinct user experience.

Encouraging contributions through reciprocation

Exercism relies on people contributing their time to help others. There are lots of ways to motivate that, from the most extrinsic of motivations (e.g. paying people) to the most intrinsic (e.g. a selfless desire to help). We are aiming to solicit contributions by targeting those intrinsic motivations of wanting to help others and wanting to learn through helping, but also by building a system that rewards those who help others by making them more likely to receive help themselves - a system built around reciprocation and fairness.

Track Structure

Currently, v2 has the idea of Core and Practice exercises. A big change from v1 to v2 was to add this structure, with the goal that Core exercises would provide the basic scaffolding for learning a language and that Practice Exercises would allow people to practice and experiment. However, as the exercises themselves are basically the same (taken from the same pool of pre-existing exercises), this has been less effective than we had hoped. For most tracks, the Core exercises have been selected and ordered to be a good on-ramp for students learning that language, but the lack of purpose-built exercises has caused significant difficulties in both solving and mentoring.

In v3, we will make this distinction between learning and practicing much clearer, with the creation of Concept Exercises, which replace Core exercises.

Concept Exercises

What do we mean by concepts?

By concepts we mean the things that a programmer needs to understand to be fluent in a language. We care specifically about how languages are different. What do I need to learn about numbers in Haskell, compared to numbers in Ruby, to be able to work with numbers in those languages? Two questions that we have found useful to ask to establish this are:

  • If someone learnt Ruby, and someone learnt Haskell, what are the things that the two people learnt that are different?
  • If a Ruby programmer learnt Haskell, what new concepts would they have to learn, what knowledge would they have to disregard, and what syntax would they have to remap?

By teaching concepts we aim to teach fluency.

What do we mean by "fluency"?

By "Fluency", we mean: Do you get that language? Can you reason in that language? Do you write that language like a "native" programmer writes in it? Fluency is the ability to express oneself articulately in a particular language.

"Fluency" is different to "Proficiency", where we use proficiency to mean: Can you write programs in the language (e.g. can you nav stdlib, docs, compose complex code structures).

Exercism focuses on teaching Fluency not Proficiency. We aim to teach people to understand what makes a language unique and how experienced programmers in that language would reason about - and solve - problems.

How are Concept Exercises designed and structured?

Concept Exercises must have the following characteristics:

  • Each one has a clear learning goal.
  • They are language-specific, not generic.
  • Stubs/boilerplate are used to avoid the student having to learn/write unnecessary code on exercises.

Exercises are unlocked based on concepts taught and learnt. Each Concept Exercise must teach one or more concepts. It may also have prerequisites on Concepts, which means it will not be unlocked until Concept Exercises teaching those prerequisite concepts have been completed.
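
As a rough illustration of this unlocking rule, here is a minimal sketch in Python. The exercise data, concept names and function names are all invented for the example; this is not Exercism's actual track configuration format or implementation.

```python
# Illustrative only: invented exercise data, not Exercism's actual
# track configuration format.
exercises = {
    "basics":  {"teaches": {"basics"},  "prerequisites": set()},
    "strings": {"teaches": {"strings"}, "prerequisites": {"basics"}},
    "classes": {"teaches": {"classes"}, "prerequisites": {"basics", "strings"}},
}


def learned_concepts(completed):
    """Concepts taught by the Concept Exercises a student has completed."""
    known = set()
    for slug in completed:
        known |= exercises[slug]["teaches"]
    return known


def unlocked(completed):
    """An exercise unlocks once all of its prerequisite concepts are known."""
    known = learned_concepts(completed)
    return [slug for slug, exercise in exercises.items()
            if exercise["prerequisites"] <= known and slug not in completed]


print(unlocked(set()))                   # ['basics']
print(unlocked({"basics"}))              # ['strings']
print(unlocked({"basics", "strings"}))   # ['classes']
```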

Concept Exercises should not inherently become more difficult as the track progresses. A seasoned developer in Language X should be able to work through all the Concept Exercises on that track spending no more than 5-10 minutes solving each one. As each exercise should be focussed on getting someone to use a concept for the first time, and because the seasoned developer already understands that concept in Language X, the exercise should feel relatively trivial for them. Later exercises may feel more difficult to a developer unfamiliar with Language X, but only because the later exercise is teaching a concept which in itself is more complicated (for example, most people would agree Recursion is a more complex topic to learn for the first time than a Loop is to remap from one language to another).

For example, we might define a concept of "Classes" and provide a short introduction that explains what a class is, how it fits with objects, state, etc. We might include a link to a good article introducing OOP and classes. Individual tracks implementing an exercise on Classes can then include this introductory text, making any changes or additions that explain their language-specific semantics and syntax.

Concept Exercises will not be mentored. We will provide automated-feedback if possible (see more about automation below), but if no feedback is found, then the exercise will be marked as approved once the tests pass. This shifts the burden of teaching to the exercise, which must provide a clear pathway to learning the concept that is being taught.

Practice Exercises

The aim of a Practice Exercise is to provide a problem that the student has enough knowledge to solve, and then let them discover the best way to solve it. To ensure the student has enough knowledge, each Practice Exercise will have prerequisite concepts, and will only become unlocked once those concepts have been taught by Concept Exercises.

Once a student has submitted a Practice Exercise they will have two options for learning more:

  1. Learning about different Approaches that can be taken.
  2. Receiving Mentoring.

Both of these are outlined in more detail below. It is envisaged that for most exercises, people will be able to learn most of what they need from Approaches, and that they will reach out for mentoring only when something is unclear or they want to have their solution looked at by a more experienced developer in that language.

Practice Exercises will not have an approval step. We do not anticipate that Practice Exercises will be able to unlock each other.

The initial pool of Practice Exercises will be made up of all the existing exercises.

Automated feedback

In v3, automated feedback will be given on both Concept Exercises and Practice Exercises. For Concept Exercises, this feedback might include compulsory changes that must be made before a student can continue on to the next exercise.

Feedback will be provided via two methods:

  1. Analyzers
  2. Representers

Analyzers

Analyzers are pieces of software that analyze code looking for known approaches and then determining feedback that should be given. They are useful for quickly reducing a wide range of solutions based on one or two overarching features. For example, in the TwoFer exercise, if someone sets the default parameter to null, we can almost always give useful feedback about that without analyzing the rest of the solution, so we can cover 50% of solutions with one simple check.
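
To make this more concrete, here is a minimal sketch of the kind of check an analyzer might run, written in Python using the standard ast module. The two_fer solution, the check, and the feedback message are illustrative only; real analyzers are per-track tools with their own interfaces.

```python
# Illustrative only: a toy analyzer check written with Python's ast module.
# The two_fer solution and the feedback message are invented for the example.
import ast

SOLUTION = '''
def two_fer(name=None):
    if name is None:
        name = "you"
    return f"One for {name}, one for me."
'''


def analyze(source):
    """Return feedback triggered by simple syntactic checks."""
    comments = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == "two_fer":
            for default in node.args.defaults:
                # A default of None forces an explicit None check in the body;
                # a string default ("you") avoids it.
                if isinstance(default, ast.Constant) and default.value is None:
                    comments.append(
                        'Consider a default value of "you" instead of None, '
                        "so the body does not need a None check."
                    )
    return comments


print(analyze(SOLUTION))
```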

Representers

Each track can provide a representer, which takes a submission and returns a normalised representation of it - normalising things such as whitespace and variable names - which dramatically reduces the variance in solutions. We can then take feedback provided to a submission and apply the same feedback to all the other submissions that normalise to the same representation.
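
As an illustration of the idea (not of any actual representer implementation), here is a minimal Python sketch that parses a solution, renames identifiers to placeholders, and re-serialises the code so that naming and formatting differences disappear.

```python
# Illustrative only: a toy representer for Python solutions.
import ast


class Normalizer(ast.NodeTransformer):
    def __init__(self):
        self.names = {}

    def placeholder(self, name):
        # Map each identifier to a stable placeholder (var_0, var_1, ...).
        return self.names.setdefault(name, f"var_{len(self.names)}")

    def visit_Name(self, node):
        node.id = self.placeholder(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self.placeholder(node.arg)
        return node


def represent(source):
    tree = Normalizer().visit(ast.parse(source))
    # ast.unparse (Python 3.9+) discards the original whitespace/layout.
    return ast.unparse(tree)


a = "def total(items):\n    result = 0\n    for item in items:\n        result += item\n    return result\n"
b = "def total( values ):\n  s=0\n  for v in values: s+=v\n  return s\n"

# Both solutions normalise to the same representation, so feedback given
# to one can be reused for the other.
print(represent(a) == represent(b))  # True
```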

We will build a bank of feedback to be given to representations. Whenever a solution is submitted that normalises to a known representation we can then give the same feedback automatically. This works because the expected number of solutions to Concept Exercises will be relatively low, due to the narrower designs of the exercises and test-suites.
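
Continuing the illustration above, the feedback bank could be a simple mapping from representation digests to feedback text. The data and function names here are invented; the digest stands in for the output of a track's representer.

```python
# Illustrative only: feedback written once for a representation is replayed
# for every later submission that normalises to the same representation.
import hashlib

feedback_bank = {}  # representation digest -> feedback text


def digest(representation):
    return hashlib.sha256(representation.encode()).hexdigest()


def record_feedback(representation, comment):
    """A mentor writes feedback once, against a representation."""
    feedback_bank[digest(representation)] = comment


def automated_feedback(representation):
    """Later submissions with the same representation get it automatically."""
    return feedback_bank.get(digest(representation))


record_feedback("def var_0(var_1=None): ...", "Prefer a string default over None.")
print(automated_feedback("def var_0(var_1=None): ..."))
# -> Prefer a string default over None.
```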

No longer will our knowledge be solely in the heads of our mentors, but it will be encoded into our product, instantly usable by future students.

Feedback on Concept vs Practice Exercises

Concept Exercises are designed to have a single optimal solution (or a very small number of them). Feedback should push a student towards that implementation. They are also designed so that there are few solutions that can pass the tests, meaning the expected number of possible submissions should be low. This means that both Analyzers and Representers should be highly effective for Concept Exercises.

For Practice Exercises, there is not expected to be an "optimal" solution, and in fact part of the enjoyment of Practice Exercises is considering the tradeoffs of the various approaches people take. This means, unlike Concept Exercises, there will be a wide array of solutions that people submit and a wide range of potentially "correct" solutions to push towards. This makes automated feedback much harder to execute well, and so we envisage that for Practice Exercises, automation will be used more to automatically group solutions into Approaches and less to automatically provide feedback.

Mentoring

In v3 the focus of human feedback narrows from teaching to mentoring. The Concept Exercises should teach concepts, and our mentors should mentor students on how to use these concepts together in the most idiomatic ways in their Practice Exercises. The aim is that most mentoring conversations will therefore be unique, focussing on the specific challenges that a student faces, rather than copy/pasting the same sort of information on the same exercise over and over. As part of this, we are splitting mentoring into three different types:

  1. "I'm stuck": If a student is stuck on an exercise and can't get the tests passing, they can request a mentor to help them. We are exploring how this could be synchronous and also whether it may be appropriate for Concept Exercises as well as Practice Exercises.
  2. "I have a question": If a student has a specific question they want help with about their code, then can ask for someone to help them with it. The aim here is that the interactions should be quite short and simple for a mentor. Someone is confused about X - I'll help explain it to them.
  3. "I would like code review": This is what currently happens on Exercism. Someone submits a solution and a mentor comes along and reviews the code in general, without being guided in any specific areas that a student might want help with. This is great for helping with the problem of "you don't know what you don't know". Whereas students asking questions have highlighted something that is unclear to them, students requesting code review are saying "I'm open to any feedback on how I can make this better".

By splitting mentoring into these three areas (and potentially in time more areas), we hope that students will receive the type of feedback that they desire, and that mentors will be able to more easily choose how to use their time in whatever ways they find most enjoyable. We also hope that because the students will have been exposed to concepts before reaching Practice Exercises, there will be less chance of them feeling overwhelmed, and that Approaches provide a first layer of useful information before the student reaches out for a mentor.

We will also provide mentors with more information about the student. Students will be asked to fill in some background about their experience before requesting mentoring, and mentors will be able to see which Concepts a student has already learned on that track.

Mentor pairing

One area lacking in v2 is the ability for mentors and students to pair up for future solutions. We will add this functionality to v3.

At the end of a mentoring interaction both the student and the mentor will be asked if they would like to interact again.

  • Both the student and the mentor will be given the option to block the other. If either selects it, the mentor won't see the student's submissions again.
  • The student will be given the option to add this mentor to their "favourites". If the mentor also adds the student to their "favourites", then when the student next submits, the mentor will be notified about the new solution.

While the UI will be worked out later, the general plan is for mentors to have a simple "favourite"/"unfavourite" button inline during a conversation, which they can press if they're enjoying/not-enjoying chatting to someone. A mentor will be able to choose a default of "favourite everyone" or "favourite no-one", with "favourite no-one" as the initial setting.

When a mentor views the list of solutions to mentor, they will be able to filter by their favourite students.

Mentors will also see one of the following on all future solutions: [not-mentored-before, mentored-before, mentored-before-and-favourited]. The student and mentor shouldn't know whether the other has favourited them, so this will be based purely on the mentor's own favouriting decision.

We have also considered the idea of permanently pairing a student with a single mentor, and rejected it for the following reasons:

  • Receiving feedback from multiple mentors is a principle we think is helpful and want to uphold. If a student and mentor choose to "pair up" they could create a team to do this, but I don't think it's something we want to promote.
  • Most students want a timely response. Pushing a solution to only one mentor means the student will almost certainly have to wait longer than if they push to the general pool of mentors, even if that mentor is notified.
  • It adds a lot more pressure on mentors. You know that students are waiting for you, and even if requests time out after a few days, it makes things much harder for the significant proportion of mentors who spend a focused few hours once a week doing a batch of mentoring.
  • We don't want students to know whether mentors have also favourited them. That adds too much risk for conflict/frustration.

Credits / Reputation / Privileges System

To help ensure that those who give the most can also receive the most value from Exercism, we are going to introduce a Credits System. For each positive contribution you make to Exercism (e.g. a GitHub contribution, mentoring, adding automated feedback) you will receive a certain amount of Credits. When someone requests mentoring, they will "spend" those Credits on their mentoring request (working name: "bounties"), with the mentor who provides the answer receiving the Credits. Users who acquire lots of Credits will therefore be able to attach a lot of them to their mentoring requests, making those requests more appealing to mentor, as the mentor will receive the Credits and can then use them to improve their own likelihood of receiving great mentoring. We will have limits on how many Credits can be attached to a mentoring request, how many mentoring requests can be made per day, and how many Credits can be spent per day.

When users acquire Credits, they will also acquire Reputation. Reputation cannot be "spent" in the same way that credits can - it accumulates - but as Reputation grows, users will unlock Privileges in Exercism's ecosystem. For example, users with a high reputation for mentoring will be able to take on mentoring requests with higher "bounties", and they will be able to write automated feedback on representations, which then generates more Credits for them every time that feedback is provided to a user in the future.

Reputation will show up on profiles, split by language and contribution-type (e.g. Ruby Contributor: 200, Ruby Mentor: 700, Ruby: 900, Total: 1,110).

Users will be provided with a fixed amount of temporary Credits each day, which do not carry over to the next day. This allows someone to request mentoring every day with the minimum bounty, even if they have spent all their existing Credits.

Approaches

Learning from other people's solutions is a fantastic way to understand where you might be going wrong or to identify areas that you don't fully understand. However, with thousands of solutions to each exercise, it can be very hard to know what is worth looking at. In v3 we are going to introduce Approaches: articles on the different approaches people take to an exercise, each linked to the cohort of solutions that exemplify it. Students will be able to explore the different Approaches once they've submitted a Practice Exercise, reading explanatory text and browsing the actual code other people have written. Over time, we aim to add tooling that can tell a student "Your solution fits into this approach".

Approaches will probably be wiki-based articles, but with some system of ownership and some sort of Wikipedia-style "Talk Page" where people can discuss edits to them. We will base the initial Approaches on the mentor-notes for current Exercises, turning them into Student-facing resources. Authoring or editing Approaches will be a way of generating Credits.

Teams

Currently, in v2, Teams is a separate product. For v3 we will be bringing the functionality inside the main website. Users will be able to organise themselves into teams and submit Practice Exercises to their teams for review, rather than submitting to the general mentoring pool. Teams will be able to mark some members as "reviewers" and others as "students", where only "reviewers" receive mentoring requests. No Credits will be acquired for mentoring within teams.

In-browser coding

We will be adding the option for students to solve exercises within their browser, rather than having to use the CLI to submit from their local machines. The existing workflow will remain, but this addition will allow students to try a language without having to install a whole toolchain on their machines. We are prototyping the technology behind this in our Research Experiment. The actual UX around this needs lots of work to determine how it fits into the general flow of Exercism.


This issue is locked. Please open new issues for comments as per the instructions at the top.


@senal Please can you stop unpinning this issue. Thanks.

@msomji Please don't unpin this :)

> @senal Please can you stop unpinning this issue. Thanks.

@iHiD, sorry for the inconvenience. I cannot recall unpinning it intentionally. :(

That's ok! :)

> For every piece of feedback, or any discussions you would like to start, please open a separate issue in this repo using this issue template. This will avoid us having huge multi-stranded conversations on one issue. This issue itself is locked.

@mikedamay Thanks for all this. Mind moving this into a new issue where we can discuss please?