If you're a Facebook friend of mine, over the last few weeks you'll have
seen status updates regarding an online quiz app that I've been working
on. I started off calling it an online testing system, but decided
against that as it had too many connotations of an app that allowed unit
tests to be executed!
My original idea for the app was to use it to determine individuals'
readiness for Salesforce certification exams and to crowdsource the questions
as a free-for-all. However, I'm rethinking how the questions will be
authored, based on the various compromises of real exam questions that have
taken place this year. I'm still planning to allow the community
access to author questions, but in a more controlled fashion, limiting
the number of questions an individual can write on a single topic.
A screenshot of a question is shown below:
One of the key aspects for me is the ability for candidates to give a
percentage rating of their confidence in each answer. This is based
on my approach to taking the real exams, as it allows me to determine how
close I think I am to the pass mark. I find it particularly useful
when assessing a candidate's readiness: if they are getting questions
wrong when they were 100% confident in their answers, that implies a
fundamental lack of understanding of that particular concept.
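To make that concrete, a query along these lines picks out the "confidently wrong" responses for a test instance. This is just a minimal sketch - Question_Response__c and its fields are illustrative names rather than my actual schema:

```
public class ReadinessCheck {
    // Responses that were wrong despite 100% confidence point to a
    // fundamental misunderstanding rather than a careless slip.
    public static List<Question_Response__c> confidentlyWrong(Id testInstanceId) {
        return [select Id, Area__c
                from Question_Response__c
                where Test_Instance__c = :testInstanceId
                and Correct__c = false
                and Confidence__c = 100];
    }
}
```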
There are also a couple of free-text areas for candidate notes and
feedback on each question.
Each question is marked with a topic, which identifies which test it
belongs to, and an area, which is simply a free-text sub-topic - Workflow
is an area under the Force.com topic, for example. The areas are used to
give candidates feedback on where they are weak - at the
moment it's simply the first five unique areas encountered when marking.
I do plan to make this more sophisticated, taking into account the
candidate's confidence and the number of incorrect answers in each area,
for example.
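For the curious, the current "first five unique areas" logic boils down to something like the following sketch (again with illustrative field names):

```
public class WeakAreaFeedback {
    // Walks the marked responses in order and captures the first five
    // unique areas in which the candidate answered incorrectly.
    public static List<String> weakAreas(List<Question_Response__c> responses) {
        List<String> areas = new List<String>();
        Set<String> seen = new Set<String>();
        for (Question_Response__c resp : responses) {
            if (!resp.Correct__c && !seen.contains(resp.Area__c)) {
                seen.add(resp.Area__c);
                areas.add(resp.Area__c);
                if (areas.size() == 5) {
                    break;
                }
            }
        }
        return areas;
    }
}
```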
Something that applies to both tests and surveys (and many other question-and-answer scenarios) is the requirement to decouple the master questions from those presented in a test instance. For example, if a question is worded incorrectly and I fix it, I don't want to retrospectively update that question as posed to a particular candidate, as it's not the question they answered. The trick here is to treat questions as templates and clone these when creating a test instance. My clones are called question responses, and that is where things like the feedback live.
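In Apex terms the cloning looks something like the sketch below, where Question__c is the template and Question_Response__c the per-candidate copy (object and field names are illustrative):

```
public class TestInstanceService {
    // Copies the template questions for a topic into question responses
    // owned by the new test instance. Later edits to a template then have
    // no effect on tests that have already been taken.
    public static void createResponses(Id testInstanceId, String topic) {
        List<Question_Response__c> responses = new List<Question_Response__c>();
        for (Question__c tmpl : [select Body__c, Area__c, Type__c
                                 from Question__c
                                 where Topic__c = :topic]) {
            responses.add(new Question_Response__c(
                Test_Instance__c = testInstanceId,
                Body__c = tmpl.Body__c,
                Area__c = tmpl.Area__c,
                Type__c = tmpl.Type__c));
        }
        insert responses;
    }
}
```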
Questions can be single answer (radio buttons), multi-answer (checkboxes),
putting answers in order (picklists) or free text (long text area). The
first three are automatically marked when the test is submitted, while
free-text answers require human review.
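The automatic marking is then a comparison of the candidate's answers with the stored correct answers - here's a sketch assuming both are held as semicolon-separated strings, with illustrative type and field names:

```
public class Marker {
    // Marks the three structured question types; free text is skipped and
    // left for a human reviewer.
    public static void mark(Question_Response__c resp) {
        if (resp.Type__c == 'Free Text') {
            return;
        }
        if (resp.Type__c == 'Multi Answer') {
            // Checkbox selections are order independent, so compare as sets
            Set<String> given = new Set<String>(resp.Answer__c.split(';'));
            Set<String> expected = new Set<String>(resp.Correct_Answer__c.split(';'));
            resp.Correct__c = given.equals(expected);
        } else {
            // Single answer and ordering questions must match exactly
            resp.Correct__c = (resp.Answer__c == resp.Correct_Answer__c);
        }
    }
}
```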
As I want to be able to have (at least) users and contacts as test
candidates, I couldn't use master-detail relationships and standard
roll-up summary fields. Instead I used this
excellent utility
from Anthony Victorio that provides roll-up summary behaviour via regular
lookup fields. It only takes a few minutes to set up and has worked fine
for me thus far. The only trick is to remember to "touch" all the child
objects when you add a new parent summary field.
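"Touching" just means carrying out an empty update of the existing child records so the utility's trigger fires and populates the new summary field - for example, from anonymous Apex (object name illustrative, and large volumes would need a batch job instead):

```
// Force the roll-up trigger to recalculate by updating the children
// without changing any field values.
update [select Id from Question_Response__c];
```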
The app is built on Force.com Sites using composition templates.
The design is from
Free CSS Templates - a great
resource if, like me, design isn't your strong suit.
At the moment there's just a single test on Dreamforce X to allow me to
test the site out in a controlled fashion. So give it a go and let
me know what you think. If you would be interested in writing or
marking questions, please tick the boxes on the test signup form. On
the results page there's a link to tweet out your score - don't feel any
pressure to do this, but if you do want to, that would be cool.
You can access the site at:
http://tests.bobbuzzard.org