lordpengwin / muzak

Amazon Echo Skill for Logitech/Squeezebox Media Server

Let's Regroup!

MikeDeSantis opened this issue · comments

commented

A few months ago there was a tremendous surge of code development on this project. Lots of forks were created and features developed. Several pull requests made it back into @lordpengwin 's trunk such as those by @heylookltsme. Others don't seem to have been incorporated such as the additions by @GeoffAtHome. @elanamir added Spotify integration and restructured the code to make future functionality easier to add.

I think it's time to regroup a bit and get some pull requests generated and incorporated. Is there any interest in this?

Definitely! I had received an Echo Dot for Christmas so I was really excited about it at the time and have since calmed down a bit, haha. But I'm definitely interested in continuing to work on the project. Count me in!

@MikeDeSantis @heylookltsme @lordpengwin Sounds good to me. Do we want to create a to-do list with the features that we all want? Personally I want to change the default music service to SqueezeBox, but when I attempted to build the headless version of a Squeeze Player I failed. That said, the proposition is far simpler than a full Squeeze Play, as Alexa would be the headless player and handle the actual content....

commented

One of the first steps should be the decision to move to the new architecture as put forth by @elanamir. This structure is much more modular and offers the framework for future intent and music service integrations. The downside is that some of the (fantastic) work in other branches that happened after the fork would need to be ported to that branch. I am in favor of the new architecture.

I just found this and I'm very interested in using it. I have LMS running in a docker container on my home lab and would very much like to control my two piCorePlayers with Alexa. This may be off topic, apologies if so, but I didn't want to start a new issue for it; it appears that you're regrouping and discussing ideas for improvements. I'm reluctant to open port 9000 to the world with basic auth; is there any way for Lambda to communicate with LMS through the Echo device without opening up ports? I wanted to add that as a possible suggestion.

commented

Hi, I'm a user from Switzerland and would love to see this coming to life in German.
I couldn't program my way out of a cardboard box, but I could offer translation support, English <> German...

Thanks :)

@epatch I don't know of any way to do this other than some sort of proxy. The skill uses the LMS API to talk directly to the server.
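
For anyone new to the thread, a minimal sketch of what that direct call looks like, assuming the standard LMS JSON-RPC endpoint on port 9000; the host name and player MAC below are placeholders, not values from the skill's code:

```
// Minimal sketch of a direct LMS JSON-RPC call (placeholder host and player ID).
var http = require('http');

function lmsRequest(playerId, command, callback) {
    var payload = JSON.stringify({
        id: 1,
        method: 'slim.request',
        params: [playerId, command]   // e.g. ['00:04:20:ab:cd:ef', ['pause']]
    });

    var request = http.request({
        host: 'my-lms-server',        // placeholder LMS address
        port: 9000,
        path: '/jsonrpc.js',
        method: 'POST',
        headers: { 'Content-Type': 'application/json' }
    }, function(response) {
        var body = '';
        response.on('data', function(chunk) { body += chunk; });
        response.on('end', function() { callback(null, JSON.parse(body).result); });
    });

    request.on('error', callback);
    request.write(payload);
    request.end();
}

// lmsRequest('00:04:20:ab:cd:ef', ['pause'], function(err, result) { console.log(err || result); });
```

This is also why the server has to be reachable from Lambda in the first place, which is what the proxy/tunnel discussion below is about.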

@rkm82 One way to at least partially do this is to move all the responses into a resource bundle file that consists of key/value pairs for each response. This file could then be loaded as a configuration option. It would be nice if anyone could customize and personalize the responses any way they want, including changing the language. On the other side, I guess you would need a language-specific set of utterances that map to the correct intents.
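
As a rough illustration of that idea (the file name, keys, and environment variable here are hypothetical, not something in the current code), the bundle could be a flat key/value file that the skill loads once at startup:

```
// Hypothetical resource bundle, e.g. responses.de.json:
//   { "PLAYING": "Spiele {title} von {artist}", "STOPPED": "Wiedergabe gestoppt" }

var fs = require('fs');

// Pick the bundle from a configuration option, falling back to English.
var language = process.env.MUZAK_LANGUAGE || 'en';
var responses = JSON.parse(fs.readFileSync('responses.' + language + '.json', 'utf8'));

// Build a spoken response from a key and some substitution values.
function speak(key, values) {
    return responses[key].replace(/\{(\w+)\}/g, function(match, name) {
        return values[name];
    });
}

// speak('PLAYING', { title: 'Yellow Submarine', artist: 'The Beatles' });
```

The utterance side would still need language-specific sample phrases in the interaction model, as noted above.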

Back in the game. I am looking to secure LMS with SSH before adding more functionality here.

Localisation can be done via default-assets.js. I think it is only the 'samples' parts that would need localisation. Happy to collaborate with someone on this.

Using SSH I can connect to my LMS in a secure manner. I need something that I can use both on AWS Lambda and locally on my LMS. I have OpenSSH working and can access my LMS remotely via an SSH tunnel, following the steps here: http://squeezeplayer.de/2011/05/3g-part-5-secure-your-communication-channel/

I was planning to use something like https://github.com/agebrock/inject-tunnel-ssh as this looks like it would do the trick.

What is the benefit of SSL?

The following code allows an SSH tunnel to be created. I haven't tested this on Alexa yet, but it looks promising.
```
var tunnel = require('tunnel-ssh');

// SSH connection and port-forwarding settings. The skill talks to the local
// end of the tunnel, which forwards to the LMS web interface on port 9000.
var tunnelconfig = {
    host: 'hosturl',                   // SSH server
    username: 'host-username',
    password: 'host-password',
    keepAlive: true,
    dstHost: 'destination-ip-address', // LMS address as seen from the SSH server
    dstPort: 9000,                     // LMS web interface port
    localHost: '127.0.0.1',
    localPort: 9000                    // local end of the tunnel
};

var server = tunnel(tunnelconfig, function(error, server) {
    if (error) {
        console.log(error);
    }
});

// Use a listener to handle errors outside the callback
server.on('error', function(err) {
    console.error('Something bad happened:', err);
});
```
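
With the tunnel up, the idea is that the skill's LMS requests go to the local end of the tunnel (127.0.0.1:9000 in the sketch above) instead of the server's public address, so port 9000 never has to be exposed to the internet. The localHost/localPort values shown are assumptions for that setup.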

Once I have checked that it is working on Alexa I will check in the code.
I am also extending create-assets to correctly create slots for playlists.
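
For reference, slots in the Alexa interaction model are just named value lists, so the generated speech assets would contain something along these lines (the type name and values below are made-up examples, not output from create-assets):

```
{
  "interactionModel": {
    "languageModel": {
      "types": [
        {
          "name": "PLAYLISTS",
          "values": [
            { "name": { "value": "road trip" } },
            { "name": { "value": "dinner party" } }
          ]
        }
      ]
    }
  }
}
```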

Working on Alexa. I had to up the timeout on the Lambda function from 3 seconds to 8 seconds. I didn't expect the additional time needed to open the SSH tunnel to be significant. I think I might be able to get away with 5 seconds, but this will be trial and error.
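
For anyone reproducing this, a sketch of making that change with the AWS SDK from Node (the function name and region are placeholders; the same setting can also be changed in the Lambda console):

```
// Sketch: raising the Lambda timeout with the AWS SDK. Placeholder function
// name and region; adjust to match your own deployment.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.updateFunctionConfiguration({
    FunctionName: 'muzak-skill',   // placeholder
    Timeout: 8                     // seconds
}, function(err, data) {
    if (err) {
        console.error(err);
    } else {
        console.log('Timeout is now', data.Timeout, 'seconds');
    }
});
```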

Hurrah! Playing a title (track) now works. Even better, the Alexa developer console accepts "speechAssets.json" in the interaction model builder. What I was forgetting to do was to rebuild the model.

Major refactor complete.

  • Player name persisted across sessions.
  • SSH tunnel used to communicate with the server.
  • Interaction model built (speech assets) that includes playlists, albums, artists and tracks.

Now to add some new functionality.

What should the next goal be?

commented

Well done @GeoffAtHome! I will be updating my Alexa skill with this version today and will report back any issues that I find. One question: in the file config.js-sample, shouldn't there be a placeholder variable:
`config.alexaAppID = "amzn1.ask.skill.xxxxxxxxxxxxxxx"`