This is an interesting toy. We found it at a playground, so I don't know if it's mass produced or a one-off. Either way, it's clearly a screenshot of an iPhone 6s or 7, rendered in wood (I think with a laser). The detail is impressive. You can see the mobile network (Verizon), day (January 29), time (10:20 AM), Bluetooth connection, and battery level (99%). There's also a third-party app installed (YouTube), so it was not a stock screenshot. The body of the phone is also nicely done. It's perfectly sized, and the microphone, speakers, home button, and camera are outlined.
This is a detailed account of the career of a single Russian troll account. It shows a small sliver of how badly Americans were played by the Russian propaganda campaign.
@TEN_GOP was a heavyweight voice on the American far right. It had over 130,000 followers; it was retweeted by some of Trump’s aides. When it was suspended, in July 2017, voices across the American far right protested....
@TEN_GOP’s brief, but spectacular, career shows how open America remains to foreign influence efforts. It was an anonymous account with no connection to the Republican party it linked itself to, yet it gained immense credence, first on the far right, and then in the mainstream, entirely because of its partisan posts. Far-right commentators supported it; mainstream outlets and politicians quoted it; liberals attacked it; all were fooled by it.
Apple has made many great laptops, but the 15-inch Retina MacBook Pro (2012–2015) is the epitome of usefulness, elegance, practicality, and power for an overall package that still hasn’t been (and may never be) surpassed.
I waited and waited to upgrade my 2010 MacBook Pro, until it was on its last legs, because I was hoping for a nice speed bump. Instead, the TouchBar MacBook Pro came out, and I couldn't do it. I bought a 2015 MacBook Pro instead. It's a great computer. I hope it lasts me. My only regret is that I could have had it sooner.
I've been keeping track of the books I read each year for a while now. In 2011, I started picking out a few favorites as well. I can't go back and add my favorites from previous years, but I can look back from now and pick out the books that still stand out to me, years later. This is necessarily colored by the passage of time, but that's all part of the fun. I've decided to start with the most recent year and go backwards to the beginning. Here are my retroactive favorites from 2010.
For a while, I was on a positive psychology kick, and read probably half a dozen books about happiness research. This was probably my favorite. Haidt surveys ancient philosophies and religions and considers their wisdom in light of modern psychology research.
I have never read a history book like American Aurora before. It is composed almost entirely of period newspaper editorials, strung together with in-character, first-person observations by the editor of the newspaper. Does that make it historical fiction? It's strange to read, but it works. The book centers on the debate between the Democratic-Republicans (heirs to the Anti-Federalists) and the ruling Federalists as the Federalist government comes close to war with Revolutionary France. An important lesson I learned from this book is that American politics has at times been even more partisan than it is now (and not just during the Civil War): rival militias marched, newspaper offices were torched, people were assaulted.
Mantel takes Thomas Cromwell, the villain of A Man For All Seasons, and makes him into a rich, three-dimensional character who will do anything to make sure his (rather feckless) master Henry VIII gets what he wants -- while, meanwhile, promoting his own Protestant agenda.
This is a strange book. Written by one of the scientists who started the field of sleep medicine and did a lot of pioneering research into sleep and sleep disorders, it's half fascinating insight into sleep research (for example, he tells the story of how he was gearing up for a big study on melatonin as a prescription sleep medicine...and then the FDA categorized it as a supplement, kicking off the largest uncontrolled sleep study in history) and half an old man's reminiscences. Nevertheless, I have recommended it to others several times over the years. It also has what Dement says is a fool-proof cure for jet lag, which is hard to carry out because doctors are reluctant to prescribe sleeping pills.
Americans get the "good war" version of World War II, where heroic democracies battle evil fascists against overwhelming odds. Davies, a historian of Poland, sets out to correct that interpretation. In his telling, the war in Europe was primarily a contest between two totalitarian states, with the Western allies relegated to ineffectual aerial bombardment. He presents a strong case -- in terms of men and materiel, the Eastern front was the axis of the conflict. This version of history is not one that we in the West get much of, and it casts the outcome of the war in a very different light.
Google Vizier performs hyperparameter tuning. And makes cookies:
At the tail end of the paper we find something else very much of our times, and so very Google. If you ever find yourself in the Google cafeteria, try the cookies. The recipe for those cookies was optimised by Vizier, with trials sent to chefs to bake, and feedback on the trials obtained by Googlers eating the results.
It has already proven to be a valuable platform for research and development, and we expect it will only grow more so as the area of black-box optimization grows in importance. Also, it designs excellent cookies, which is a very rare capability among computational systems.
If your engineering organization uses a staged release process, it might be implemented like this:
Development work happens on the master branch, and periodically, releases are created and tagged. In order to reach the production environment, a release must be promoted from Dev to QA to Production. This could mean that release 0.1 is running on production while release 0.2 is being QAed, and the developers just finished 0.3 and are starting to work on new features for the next release.
With this release model, when a bug is discovered in production, a hotfix might be necessary, because otherwise you'd be promoting the bugfix along with a bunch of unreleased changes straight to production. Instead, typically, a branch is created from the tag that is running on production and the bug is fixed there. This hotfix release can go through the same promotion process. The theory is that this will be smoother because there are fewer changes.
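In git terms, the branch-from-tag step might look like this (a sketch; the tag name v0.1 and the hotfix/0.1.1 naming convention are illustrative, not prescribed):

```shell
# Branch from the tag that is currently running in production...
git checkout -b hotfix/0.1.1 v0.1

# ...fix the bug on this branch, then commit and tag the hotfix release
git commit -am "Fix the production bug"
git tag v0.1.1
```

The hotfix tag then goes through the same Dev-to-QA-to-Production promotion as any other release.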
One challenge with this approach is getting your continuous integration system to build an artifact in order to make the hotfix deployable. By default on Travis CI, only master branch builds run the deploy stage. If you don't know the name of the branch to deploy ahead of time, you need to configure the deploy stage for all branches with a condition:
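A first attempt (a sketch only -- the script provider and deploy script are illustrative) might try to recover the branch name from git itself:

```yaml
deploy:
  provider: script
  script: ./deploy.sh
  on:
    all_branches: true
    # Ask git which branch we're on -- this is the approach that fails
    condition: $(git rev-parse --abbrev-ref HEAD) =~ ^(master|.*hotfix.*)$
```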
However, this doesn't work because of how Travis CI checks out your project to build. Travis CI clones your branch and then checks out the ref to build directly. This makes sense -- it guarantees that Travis builds exactly what triggered it -- but doesn't work with rev-parse.
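You can reproduce the problem locally: checking out a commit by SHA, as Travis CI does, leaves git in "detached HEAD" state, and rev-parse can no longer recover a branch name.

```shell
# Build a throwaway repo and check out its commit by SHA,
# mimicking how Travis CI checks out the ref to build.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git checkout -q "$(git rev-parse HEAD)"   # detached HEAD, like Travis

git rev-parse --abbrev-ref HEAD           # prints "HEAD", not a branch name
```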
Fortunately, you can check the $TRAVIS_BRANCH environment variable and use that instead. But be careful, because pull request builds (which represent the merge of the branch with master) also set $TRAVIS_BRANCH.
Here's a working solution for releasing master and branches containing hotfix:
# Deploy when this is a push build and the branch is either master or contains "hotfix"
# NOTE: The check for TRAVIS_PULL_REQUEST is necessary because for a pull request build
# TRAVIS_BRANCH will be the target of the pull request (typically master).
# See: https://docs.travis-ci.com/user/environment-variables/#Default-Environment-Variables
condition: $TRAVIS_PULL_REQUEST = "false" && $TRAVIS_BRANCH =~ ^(master|.*hotfix.*)$
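In context, that condition lives under the deploy provider's on: key. A sketch (the script provider and deploy script here are illustrative):

```yaml
deploy:
  provider: script
  script: ./deploy.sh
  on:
    all_branches: true
    condition: $TRAVIS_PULL_REQUEST = "false" && $TRAVIS_BRANCH =~ ^(master|.*hotfix.*)$
```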
If you use this process with Travis CI, hopefully this will save you some time. Time you can use to switch to continuous deployment instead of rolling releases. Sorry, sore subject.
Adrian Colyer has a cool way to help understand orders of magnitude: translate them into human terms. For example, to understand how long a second takes in computer time, upscale nanoseconds to seconds:
If we set up a ‘human time’ scale where one nanosecond is represented by one second, then we have:
1 nanosecond = 1 second
1 microsecond = 16.7 minutes
1 millisecond = 11.6 days
1 second = 31.7 years
That slow page render time starts looking different when you think about it taking 15 years...
He has similar analogies for latency and throughput.
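The scaling above is easy to check: one nanosecond mapping to one second means multiplying every duration by a billion, then converting to friendlier units.

```python
SCALE = 1_000_000_000  # 1 nanosecond is represented by 1 second

MINUTE = 60
DAY = 86_400
YEAR = 365.25 * DAY

# 1 microsecond -> scaled seconds -> minutes
print(1e-6 * SCALE / MINUTE)  # ~16.7 minutes

# 1 millisecond -> scaled seconds -> days
print(1e-3 * SCALE / DAY)     # ~11.6 days

# 1 second -> scaled seconds -> years
print(1 * SCALE / YEAR)       # ~31.7 years
```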
I recently published a long tutorial about how and why to build your own query parser for your application's search feature. I wrote it because I've seen the pain of not doing it.
In the first application I worked on in my career, I added full-text search using Lucene. This was before Solr or Elasticsearch, so I learned a lot building my own indexing and search directly. When we started, we used the venerable Lucene QueryParser. However, we ran into problems -- mainly around user input. I looked at how QueryParser was implemented. It used JavaCC to parse user input and build the underlying Lucene query objects. Following the built-in parser as a model, I built a simpler one that we could safely expose to users. It was my first experience building a parser for real.
I always wanted to build a query parser at my last job, but there was never time. At my current job, we also ran into similar issues. So I decided to write a tutorial.
First, I wanted to write the code. I started reading about parser generators in Ruby (my strongest language right now) and quickly settled on Parslet because of its great documentation and easy-to-use API. Writing the code was surprisingly easy. I ran into hurdles along the way, of course, but I was able to get something working really quickly. Writing tests and building a query generation harness helped me find problems I missed earlier.
Writing the tutorial took longer. I had to do a lot of research into PEG parsers to avoid writing something incorrect. I sent drafts to former coworkers and got some great feedback that I incorporated into the tutorial.
Along with the code for the parser itself and the tutorial, I also wrote a mini publishing system. The tutorial is written in Markdown, with a preprocessor to include snippets of code and SVGs (for the diagrams), and the result is put into an HTML layout. The Markdown is processed by CommonMarker (one of the nicest Markdown parsers) and the source code is colorized by Rouge. Writing this little publishing system reminded me how fun it can be to write software for yourself! It's janky, but it was fun to write and it gets the job done.
Haskell Prime is a collection of ideas for future versions of Haskell, including a proposal to generalize the numeric typeclasses by removing their semantic content, replacing Num and most of its subclasses with purely algebraic classes…
This proposal brings the straightforward clarity of Monad to arithmetic, an area where Haskell has long suffered from comprehensibility bordering on practicality.