If I made this mistake, I’m sure someone else has too.
I have a small website I maintain with a friend where we sell reusable static cling vinyl window snowflakes. I believe we produce the highest quality snowflake on the market, and we sell it on Etsy, eBay, Amazon, and on our own website at www.windowflakes.com.
For the website at windowflakes.com, I was showing it to some family members tonight when it came up with an SSL error. Something along the lines of “this certificate is not trusted”, etc. Ahh! This is our busy season! They were using IE9, but I figured maybe I had let the certificate expire or something else was going on. A little research now that I’m home shows that I hadn’t installed it properly.
Some Google searching and I ended up here:
You can type in your domain name and have it validate the certificate for you. I did that, and it passed every check except the last: “None of the common names in the certificate match the name that was entered (www.windowflakes.com). You may receive an error when accessing this site in a web browser.”
Fortunately, they also provide links to the fix:
So I ended up here, since I got the certificate from StartSSL and use nginx:
And a few minutes later - fixed. All you need to do is merge the regular certificate file with the intermediate certificates you can download from the provider’s website. I ran the checker again, and all was fine.
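In nginx terms, the fix looks something like this. The filenames and paths here are just placeholders (StartSSL names its intermediate differently depending on the certificate class), so substitute your own:

```shell
# Concatenate your site certificate first, then the provider's
# intermediate certificate(s). Order matters: nginx expects your
# own cert at the top of the bundled file.
cat ssl.crt intermediate.pem > ssl-unified.crt

# nginx then points at the bundle instead of the bare certificate:
#   ssl_certificate      /etc/nginx/ssl/ssl-unified.crt;
#   ssl_certificate_key  /etc/nginx/ssl/ssl.key;
# ...followed by reloading nginx to pick up the change.
```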
Go double check your SSL certificates are installed correctly too!
- MacBook Pro Retina (purchased recently after I hosed the screen on my old Lenovo on a trip to DC)
- I also have a custom-built PC desktop running Windows 7 64-bit as the base OS, using VirtualBox to manage a couple of custom Windows environments for clients and to boot into Ubuntu.
- Sublime Package Manager
- Sublime Linter (with jshint installed via node.js)
- Visual Studio
- .NET MVC for server-side development
- SQL Server Compact Toolbox (for local integration testing using SQL CE)
- iTerm 2
- p4merge for diff / mergetool in git
- Personal (private) projects on bitbucket
- Public projects and a lot of the work I do for clients on GitHub
- Balsamiq Mockups for wireframes / IA diagrams
- Google Chrome (and related debuggers) for primary development - currently using the dev channel version for offline packaged-app API support.
- XAMPP / MAMP anytime I need an Apache environment for some reason, mostly when interacting with a WordPress site
- MongoHub (Mac Only) for a nice MongoDB gui
Plugins / Packages / Libraries
- backbone.js as my primary front-end framework
- express.js for node server-side MVC framework
- backbone-nested on a couple projects to handle nested backbone models a little easier.
- Mongoose for MongoDB support in node.js
- RequireJS for AMD module support client-side
- NUnit for .NET testing (using the built-in Resharper test runner)
- PetaPoco for a micro ORM in .NET to SQL Server
- Ninject for Dependency Injection
- Jade templating engine on the new stuff, both in backbone and in node. Underscore’s templating engine on my older projects.
I’m sure I’m missing stuff. That’s good for now though.
I’ve created a gist that has the files I used to get a basic test setup:
Here are the pieces to that gist:
- index.html - Just the basics to call require.js (which I call from the main application’s /libs/ folder, rather than keeping a separate copy in the tests folder). This then uses the data-main to call SpecRunner.js. In general I try to call all shared libraries from the application’s root, rather than copying to a test folder, so I don’t accidentally get library versions out of sync.
- SpecRunner.js - This is the core of the test setup. You need to reference all the libraries, including using the shim functionality of RequireJS to load Backbone and Underscore (you have to do this in your main app as well). Note that this is also set up to use the chai-jquery plugin, and I like to use the BDD Should assertions. Also interesting here is that mocha is listed in the dependency array but isn’t named in the function arguments. This is done on purpose to put all of mocha’s globals into scope - “mocha”, “describe”, “it”, etc.
- app/models.js and model-tests.js - Just a simple unit test to prove all this works, testing that Backbone correctly assigns the urlRoot. Not necessarily a test you’d write in real life, but an easy hello world. Note that I’m using the Node-style format for declaring dependencies here, but the other style RequireJS allows, where you pass in an array of dependencies, works just fine as well.
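To make the shim part concrete, the skeleton of a SpecRunner.js like this generally looks something like the sketch below. The paths are placeholders for wherever your copies live, and it runs in the browser via index.html, so it isn’t standalone:

```javascript
// SpecRunner.js - a minimal sketch; all paths are placeholders.
require.config({
  baseUrl: '../',
  paths: {
    jquery: 'libs/jquery',
    underscore: 'libs/underscore',
    backbone: 'libs/backbone',
    chai: 'test/libs/chai',
    'chai-jquery': 'test/libs/chai-jquery',
    mocha: 'test/libs/mocha'
  },
  // Backbone and Underscore (pre-AMD versions) don't call define(),
  // so shim them here - and in the main app's config as well.
  shim: {
    underscore: { exports: '_' },
    backbone: { deps: ['underscore', 'jquery'], exports: 'Backbone' }
  }
});

// mocha is in the dependency list but not the argument list on
// purpose: loading it puts `mocha`, `describe`, `it`, etc. into
// global scope for the specs.
require(['chai', 'chai-jquery', 'jquery', 'mocha'],
  function (chai, chaiJquery, $) {
    chai.should();          // BDD "should" style assertions
    chai.use(chaiJquery);   // jQuery-aware matchers
    mocha.setup('bdd');

    // Load the spec files, then kick off the run.
    require(['test/model-tests'], function () {
      mocha.run();
    });
  });
```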
Hope this helps!
I have several sites that I manage in one form or another, and a couple years ago I made a huge move from sending files over FTP to maintaining a remote repository on the server. Originally Mercurial - now Git. Still, same concept:
- Push to a central repository like Bitbucket or GitHub.
- Create a local clone on your production server in your /var/www directory or whatever
- When you make a new change, commit it locally…
- … push to GitHub…
- … ssh into your production server …
- Do a ‘git pull’ from GitHub
This is better than FTP, but still isn’t ideal. First, it puts either an .hg or .git folder in the root of your production code. But also, it comes with the added step of having to SSH into your server.
With all the wonderful things Heroku can do with their ‘git push heroku master’, I figured I’d go looking for a better solution. I found it here:
Here’s the core of the post:
- Make sure you can SSH into your server with your private key. Plenty of articles on how to do that. So in a terminal you should be able to type: ssh firstname.lastname@example.org and it’ll log you in without asking for a password.
- mkdir website.git && cd website.git
- git init --bare
- Create a new post-receive hook in git by creating a file in hooks/post-receive with this: https://gist.github.com/3421135
- Make sure it’s executable: chmod +x hooks/post-receive
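The linked gist is the authoritative version of that hook, but the commonly used form is only one real line: set GIT_WORK_TREE so git checks the pushed files out into the web root. A sketch, with /var/www/website.com as a placeholder path:

```shell
#!/bin/sh
# hooks/post-receive - git runs this on the server after each push.
# GIT_WORK_TREE tells git where to check the files out, so the live
# site gets plain files and never contains a .git folder.
GIT_WORK_TREE=/var/www/website.com git checkout -f
```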
- Then on your local repository:
git remote add production ssh://email@example.com/website.git
- And the key part:
git push production +master:refs/heads/master
Now, from your local git repository, you can do this:
git push production
And your changes automatically move to the live server!
Recently I’ve been working on a project, built in backbone.js, that is required to call a remote web service for its data. It also needs to work in Chrome, Firefox, and IE back to at least IE8.
I was sure this issue had been solved already. As it turns out, there are plenty of hacks out there to get it to work. There’s a great post on StackOverflow with some of your options:
Initially I implemented the flxhr option. Simple to get started - just drop some scripts on your page, check for jQuery.support.cors, and if the browser doesn’t support it, initialize the flxhr hack. This of course requires the user has Flash installed, but that’s not a huge deal. Ultimately I went looking for another option however because users were reporting some issues with things hanging in IE, and although I couldn’t pin it to the Flash proxy, it seemed possible it was causing it.
Instead of trying to implement another hack, I decided to try the native XDomainRequest support in Internet Explorer. This is what Microsoft chose to use in IE8 and IE9 instead of cross-origin XMLHttpRequest. I found this little script that overrides a function in jQuery to check for XDomainRequest support and use it in those browsers.
Once again things were looking good. That is, until there was a server error and backbone showed a 404 Not Found error to the user instead of the proper error message. Looking at the code, you’ll see on line 24 that if there’s an error in the ajax request, it sends a hardcoded 404 error back to the jQuery callback. Well, surely I can fix that, I thought. Maybe even patch it on GitHub. Unfortunately, as the Microsoft page on XDomainRequest says, “The document can respond to the error, but there is no way to determine the cause or nature of the error.” Sure enough, after plenty of testing, nothing I did would give me the real HTTP response from the server. Finally, the workaround:
I took the same xdr.js code, but modified it slightly. Unfortunately, I also had to modify the server-side code. What I do now is always return a 200 response to XDomainRequests from the server, but if there is an error, I send a JSON response back containing the true HTTP Status Code (400, 500, etc), and a Message field with the error. Lines 24-30 are what I changed to detect whether the response contains a JSON object with a StatusCode and Message field. If so, it overrides the actual status code and response sent back to jQuery.
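The detection side of that change can be shown in isolation. This is a sketch, not my actual xdr.js diff - the StatusCode and Message field names are the convention described above, and the real code wires the result into jQuery’s complete callback:

```javascript
// Sketch of the response handling in a modified xdr.js-style transport.
// The server always answers XDomainRequest calls with HTTP 200; real
// errors arrive wrapped in a JSON envelope like:
//   { "StatusCode": 500, "Message": "Something broke" }
function unwrapXdrResponse(responseText) {
  var status = 200;
  var statusText = 'OK';

  try {
    var parsed = JSON.parse(responseText);
    // If the envelope fields are present, this "200" is really an error.
    if (parsed && parsed.StatusCode && parsed.Message) {
      status = parsed.StatusCode;
      statusText = parsed.Message;
    }
  } catch (e) {
    // Not JSON - treat it as a normal successful payload.
  }

  return { status: status, statusText: statusText, responseText: responseText };
}
```

The transport hands that status and statusText to jQuery instead of the hardcoded values, so Backbone’s error callback sees the real code even though the wire response was a 200.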
This feels like a horrible hack, but the experience seems to have improved, I can continue using my REST routes, and we’re not relying on Flash anymore.