ghost-coding-challenge

Ghost Coding Challenge. Found here.

Steps, thought process

Will try to document my process.

Vanilla frontend with mock data

This seems like a good first thing to tackle. It will be much easier to build out the backend if there is a frontend in place. According to the instructions, the MVP should be in vanilla JS, which is quite nice, and in that case, just how simple can it be? Not sure if jQuery is cool or not these days, so let's not use it, or at least try not to.

Mock comment data

The mock data will just be a big JSON file, but let's try to interface with it from the get-go like it's an API. The MVP need not have nested comments/replies, but there is no sense in not at least setting ourselves up for them. Let us say right now that a comment will have its replies as an array of comment IDs, and every comment will also have an "in-reply-to" field, just an ID, where an id of -1 means it is a "root" comment. Also, our only user is someone named Chad.

[
    {"id": 0, "upvotes": 33, "text": "Lorem Ipsum", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 1, "upvotes": 0, "text": "Lorem Ipsum1", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 2, "upvotes": 0, "text": "Lorem Ipsum2", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 3, "upvotes": 0, "text": "Lorem Ipsum3", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 4, "upvotes": 7, "text": "Lorem Ipsum4", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 5, "upvotes": 0, "text": "Lorem Ipsum5", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 6, "upvotes": 9, "text": "Lorem Ipsum6", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 7, "upvotes": 0, "text": "Lorem Ipsum7", "username": "chad", "replies": [], "in-reply-to": -1},
    {"id": 8, "upvotes": 0, "text": "Lorem Ipsum8", "username": "chad", "replies": [], "in-reply-to": -1}
]

One thing to note: upvotes should probably be an array of user ids, so people can know who upvoted them (unclear in the prompt whether that is required, but let's err on the side of featureful), and username should really just be a user rowid. We must remember that.
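
Just to make that concrete, a comment under the revised shape might look something like this (purely a sketch of where this could go, not what the mock data uses yet):

// hypothetical revised shape: upvotes as an array of user ids, user referenced by rowid
const exampleComment = {
    id: 0,
    upvotes: [2, 5, 9],      // ids of the users who upvoted, instead of a bare count
    text: "Lorem Ipsum",
    user: 1,                 // user rowid instead of a username string
    replies: [],
    "in-reply-to": -1
}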

A simple js script for a plain html page

Just how simple can it be?

function formatComment(data){
    return `${data.username} says: ${data.text} -- ${data.upvotes} 🖤`
}

//Takes comment data and returns a list element with its data
function renderComment(data){
    let el = document.createElement("li")
    el.innerHTML = formatComment(data)
    return el
}

// Takes an array of comment data and fills #comments-list with an ol of rendered comments
function renderAllComments(datas){
    document.querySelector("#comments-list").innerHTML = ''
    let list = document.createElement("ol")

    for(let i = 0; i < datas.length; i++){
        const thisEl = renderComment(datas[i])
        list.appendChild(thisEl)
    }
    document.querySelector("#comments-list").appendChild(list)
}

// async, returns a Promise with an array of comments
function fetchComments(){
   return fetch("mock-comments.json").then(res=>{
      return res.json()
   })
}

Submit comment

This is very simple indeed! But how can I add a submit button? The whole thing is tied to that async call to the mock data, which, once done, can render the page. Let us not have the mock data dictate our strategy: the backend will eventually need to be made, and there everything will be discrete API calls. I think for now a global variable in our script will suffice to mock the "global" endpoint.

var DBGLOBAL = []

<<comment-view-utils>>

function insertComment(data){
    DBGLOBAL.push(data)
}

function render(){
    fetchComments().then(res=>{
        if(DBGLOBAL.length>0){
            res = res.concat(DBGLOBAL)
        }
        res = res.sort((a,b)=> a.id - b.id)
        renderAllComments(res)
    })
}

function submitComment(){
  let comment = document.querySelector('#comment-box')
  if(comment){
    console.log(comment.value)
    insertComment({
        id: 32, // placeholder id for the mock; a real backend would assign this
        upvotes: 0,
        username: "chad",
        text: comment.value,
        replies: [],
        "in-reply-to": -1
    })
    render()
  }
}
render()
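
One bit of glue not shown above is the markup side. A rough sketch, assuming the #comment-box input already exists and a hypothetical #submit-button sits next to it:

// hook the (assumed) submit button up to submitComment
document.querySelector("#submit-button").addEventListener("click", (e) => {
    e.preventDefault() // in case the button sits inside a form
    submitComment()
    document.querySelector("#comment-box").value = "" // clear the box after posting
})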

Very inefficient to re-render the whole thing on every submit! But… let's just add the backend and see what happens from there.

mock Express backend

N.B. Will try to clock in and out from this point forward. The work so far took approximately 45 min to 1 hr. There will be a lot of clocking in and out because I am working on this while on call at work.

For V1 our API can be reduced to: a POST comment, a GET comment(s), and a PATCH like. One thing I am thinking about in general is how people usually "paginate" SQL queries. We could conceivably have a lot of comments, so we shouldn't just throw "SELECT * FROM comments" at every single client. We need to know the total number of comments, and then the client can request ranges on its end. That entails one more API request: a GET comment count. Perhaps we also need to explicitly separate GET a single comment from GET comments plural.
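
To make the pagination idea concrete, the client-side flow I have in mind is roughly the following. This is a sketch against the mock routes defined just below; the page size, and the assumption that the count endpoint eventually returns a plain number, are made up here:

// rough sketch of a client paging through comments for one article
const PAGE_SIZE = 20 // arbitrary page size, for illustration only

async function fetchCommentPage(article, page){
    // ask how many comments exist, then request just one range of them
    const total = await fetch(`/api/comment/count?article=${article}`).then(r => r.json())
    const min = page * PAGE_SIZE
    const max = Math.min(min + PAGE_SIZE, total)
    return fetch(`/api/comments?article=${article}&min=${min}&max=${max}`).then(r => r.json())
}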

const express = require('express')
const app = express();

const PORT = 3000

//quick mock controller, (I miss typescript!)
const mockController = {
    async getComment(n){
        return JSON.stringify({msg: `You are getting comment with id ${n}`})
    },
    async getComments(article, min, max){
        return JSON.stringify({msg: `You are getting comments for article ${article} with ids between ${min} and ${max}`})
    },
    async getCommentCount(article){
        return JSON.stringify({msg: `You are getting the amount of comments for article with id ${article}`})
    },
    async postComment(article, username, text){
        return JSON.stringify({msg: `${username} has posted a comment to article with id ${article}, inputting the text "${text}"`})
    },
    async likeComment(n){
        return JSON.stringify(`Comment with id ${n} is given a new like`)
    }
}

app.set('port', PORT);
app.use(express.text()); // parse plain-text request bodies so req.body is populated for the POST route

// note: /count must be registered before /:id, or "count" gets captured as an id
app.get('/api/comment/count', async (req,res, next)=>{
    try{
        res.json(await mockController.getCommentCount(req.query.article));
    } catch (err) {
        next(err)
    }
})
app.get('/api/comment/:id', async (req,res, next)=>{
    try{
        res.json(await mockController.getComment(req.params.id));
    } catch (err) {
        next(err)
    }
})
app.get('/api/comments', async (req,res, next)=>{
    try{
        res.json(await mockController.getComments(req.query.article,
                                                  req.query.min,
                                                  req.query.max));
    } catch (err) {
        next(err)
    }
})
app.patch('/api/comment/like', async (req,res, next)=>{
    try{
        res.json(await mockController.likeComment(req.query.id));
    } catch (err) {
        next(err)
    }
})
app.post('/api/comment', async (req,res, next)=>{
    try{
        res.json(await mockController.postComment(req.query.article,
                                                  req.query.username,
                                                  req.body)); // body as just plaintext, is that ok?  Where should we sanitize?
    } catch (err) {
        next(err)
    }
})

// from old project
function onError(error) {
    if (error.syscall !== 'listen') {
        throw error;
    }

    const bind = typeof PORT === 'string'
        ? 'Pipe ' + PORT
        : 'Port ' + PORT;

    // handle specific listen errors with friendly messages
    switch (error.code) {
        case 'EACCES':
            console.error(bind + ' requires elevated privileges');
            process.exit(1);
            break;
        case 'EADDRINUSE':
            console.error(bind + ' is already in use');
            process.exit(1);
            break;
        default:
            throw error;
    }
}

const server = app.listen(PORT);
server.on('error', onError);
// server.on('listening', onListening);

Ok, so with this, some simple curl tests show we are getting the expected output! Something I realized while doing this is that we probably want to assume there is, in fact, more than one article someone might comment on, so the API takes the article id as a parameter/query. It seems we are naturally pushed now toward creating our schema, which will be done with SQLite.
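
For the record, those checks just hit each route and eyeball the mock message that comes back. Written with fetch instead of curl, purely for illustration, they look roughly like this:

// quick smoke checks against the mock API (Node 18+ has fetch built in)
const base = 'http://localhost:3000'
fetch(`${base}/api/comment/3`).then(r => r.json()).then(console.log)
fetch(`${base}/api/comments?article=1&min=0&max=10`).then(r => r.json()).then(console.log)
fetch(`${base}/api/comment/count?article=1`).then(r => r.json()).then(console.log)
fetch(`${base}/api/comment/like?id=3`, { method: 'PATCH' }).then(r => r.json()).then(console.log)
fetch(`${base}/api/comment?article=1&username=chad`, {
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' },
    body: 'hello from a smoke test'
}).then(r => r.json()).then(console.log)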

the database

Ok, so there needs to be a database; what tables does it need? It will need a comments table, of course.

CREATE TABLE IF NOT EXISTS comments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
-- likes INTEGER REFERENCES like_tallies(id),
user INTEGER REFERENCES users(id),
article INTEGER REFERENCES articles(id),
comment_text TEXT DEFAULT '',
in_reply_to INTEGER DEFAULT -1 REFERENCES comments(id)
);

N.B. We are setting ourselves up to not know who likes a comment. This isn't quite specified one way or the other, but I kinda feel it won't be too hard to add on the front end later; it simply needs to be handled in the business-logic part. We just need another table, "likes", that relates a comment ID with a user ID.

CREATE TABLE IF NOT EXISTS likes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user INTEGER REFERENCES users(id),
comment INTEGER REFERENCES comments(id));

But now the problem is that in order to get a like tally, there needs to be a relatively expensive count query. Seems like a problem that triggers can solve (I think that's what they're called)… Let's see…

Yes, this is what we want! If there is an insertion into the likes table, then another table called like_tallies will be updated with the new count. If someone changes their mind and unlikes a comment, that row in likes will be deleted, and again we update the count. It's simple memoization at the database level.

CREATE TABLE IF NOT EXISTS like_tallies (
id INTEGER PRIMARY KEY AUTOINCREMENT,
comment_id INTEGER REFERENCES comments(id),
like_count INTEGER DEFAULT 0);

CREATE TRIGGER IF NOT EXISTS new_like_trigger
AFTER INSERT ON likes
BEGIN
UPDATE like_tallies SET like_count = (SELECT count(id) FROM likes where comment = NEW.comment) WHERE comment_id = NEW.comment;
END;

CREATE TRIGGER IF NOT EXISTS unlike_trigger
AFTER DELETE ON likes
BEGIN
UPDATE like_tallies SET like_count = (SELECT count(id) FROM likes where comment = OLD.comment) WHERE comment_id = OLD.comment;
END;

CREATE TRIGGER IF NOT EXISTS new_like_tally
AFTER INSERT ON comments
BEGIN
INSERT INTO like_tallies(comment_id, like_count) VALUES (NEW.id, 0);
END;

Ok, so this is looking good! A few more things to finish up:

CREATE TABLE IF NOT EXISTS articles (
id INTEGER PRIMARY KEY AUTOINCREMENT,
data TEXT );

CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT );


-- some default values
INSERT INTO users(username) VALUES ('DEFAULT_USER');
INSERT INTO articles(data) VALUES ('an interesting article, maybe someone will comment on it?');
INSERT INTO comments(id, user, article, comment_text) VALUES (-1, 1, 1, 'the singleton root comment');

N.B. Wasted time: there is a gotcha with WHERE statements inside triggers! Spent too much time there… RTFM.

But this all tangles into a functional init script for our db, and it seems like the triggers are working ok. One problem so far is that there is nothing stopping a user from liking something twice, as far as this logic is concerned. Seems like we just need the right UNIQUE placement (a UNIQUE(user, comment) constraint on the likes table), but this should also be checked a little farther outside, in the API layer, too!

ANOTHER MISTAKE. I didn't think about nesting and comments… which might complicate the "article" part of the schema… We have already decided on the convention of "in-reply-to" being -1 to mean a comment is not in reply to anyone, and that means that if it is, in fact, in reply to someone, it gets the global id of the parent comment. What are the consequences of just adding that field, maybe with a default value? Not much, I think, but it is some extra work on the controller/API side. That is fine. Will update it up there now. (SQLite does allow a foreign key to reference its own table!)

A db controller.

Moving right along. Next up: something to connect the database to our API, a controller. Here is where I will have to diverge from a proper production setup, because I am choosing SQLite as my db provider, so differences will start to emerge between this particular implementation and one using MySQL.

Will use the standard sqlite3 module, so native bindings.

const sqlite3 = require('sqlite3').verbose();
const db = new sqlite3.Database('./database.db'); // probably shouldn't hardcode this in the module.. but its MVP

module.exports = {
  async getComment(n){
    return await new Promise((res,rej)=>{
      db.get('SELECT * FROM comments WHERE id= $id', {$id: n},(err, row)=>{
        res(row)
      })
    })
  },
  async getComments($article, $min, $max){
    return await new Promise((res,rej)=>{
      // db.all (not db.get) so we get every matching row, scoped to the article
      db.all('SELECT * FROM comments WHERE article = $article AND id > $min AND id < $max LIMIT 100', {$article, $min, $max},(err, rows)=>{
        if(err) return rej(err)
        res(rows)
      })
    })
  },
  async getCommentCount($article){
    return await new Promise((res,rej)=>{
      db.get('SELECT count(id) FROM comments WHERE article= $article', {$article},(err, row)=>{
        res(row)
      })
    })
  },
  async postComment($article, $user, $text, $replyTo){
    return await new Promise((res,rej)=>{
      // the column is comment_text (not body), and $user is the user's rowid per the schema
      db.run('INSERT INTO comments(user, article, comment_text, in_reply_to) VALUES ($user, $article, $text, $replyTo)', {$article, $user, $text, $replyTo}, function (err){
        if(err) return rej(err)
        res(this.lastID)
      })
    })
  },
  async likeComment(commentID, userID){
    return await new Promise((res,rej)=>{
      db.run('INSERT INTO likes(user, comment) VALUES ($user, $comment)', {$user: userID, $comment: commentID}, function (err){
        res(this.lastID)
      })
    })
  }
}

This seems like it should all work… A couple of thoughts as I did this. I know we aren't thinking about authentication/users, but if we were, who should know the userID? Where would we store it, and would there need to be another method in the controller that gets the id from the name? I think it's all outside the scope of this, but I did leave likeComment with param names that underline this point.

I imagine the way things normally work is that the user logs in and is now in a session; the session gives the necessary ids to the client. I'm sure it's more complicated than just that, but it's probably something like that, and in general it makes sense to only look up a user ID once and keep it with the session/client.
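
Purely as an illustration of that hand-waving (out of scope for the challenge, and the login route, secret, and hardcoded id below are all made up), the usual express-session shape would be something like:

// illustrative only: keep the logged-in user's id in a server-side session
// (assumes the `app` from the mock express server above)
const session = require('express-session')
app.use(session({ secret: 'dev-only-secret', resave: false, saveUninitialized: false }))

app.post('/api/login', (req, res) => {
    // ...look the user up however auth ends up working, then remember their id
    req.session.userId = 1
    res.json({ ok: true })
})

// later handlers could read req.session.userId instead of trusting the client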

Let's see what happens when we plug this in. (N.B. From here forward we will work with the files alone, to not get confused about the tangling, especially for the express server part.)

pasting it all up together for V1

One thing I realize: with the addition of in-reply-to, as well as a few other things, my min/max API query seems dumb. We can't count on knowing how contiguous comment ids are across different articles (even, yes, knowing these are only hypothetical articles), so it makes sense to use LIMIT and OFFSET instead… at least for the MVP; I know they aren't best practice in general. So we will make the necessary changes to the controller and the server, and now speak in terms of limit and offset. A sketch of the revised controller query is below.
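
A sketch of the direction for the revised controller query (the function name and the ORDER BY are my own choices, and this isn't wired into the server yet):

const sqlite3 = require('sqlite3').verbose()
const db = new sqlite3.Database('./database.db')

// fetch one page of comments for an article using LIMIT/OFFSET instead of id ranges
function getCommentsPage($article, $limit, $offset){
  return new Promise((res, rej)=>{
    db.all('SELECT * FROM comments WHERE article = $article ORDER BY id LIMIT $limit OFFSET $offset',
           {$article, $limit, $offset},
           (err, rows)=> err ? rej(err) : res(rows))
  })
}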
