2005: At a bank dealing with mortgage secondary markets, we snuck Common Lisp into the Research group (where "research" isn't R&D but feeding information to the trading floor) to plow through dirty data from multiple databases before our DBAs could make it available via Oracle. This hack helped keep the math & stats PhDs' pipelines full.
Rationale for using CL: because values have types rather than variables (though a variable's contents may be asserted with CHECK-TYPE; see the Google CL style guide), this essentially gave us late binding of schema.
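A minimal sketch of what that buys you, assuming a hypothetical dirty-data feed where a field may arrive as a number or as a string depending on the source database:

```lisp
;; In CL, values carry type; variables need not declare one.  A field
;; from a dirty feed can be validated at the point of use rather than
;; at schema-definition time.  (Sketch only: READ-FROM-STRING on
;; untrusted input is unsafe in production.)
(defun parse-loan-amount (raw)
  "RAW may arrive as a REAL or as a string, depending on the source DB."
  (let ((amount (if (stringp raw)
                    (read-from-string raw)
                    raw)))
    (check-type amount real)   ; assert the value's type at runtime
    amount))
```

Both (parse-loan-amount "1500.25") and (parse-loan-amount 1500.25) yield 1500.25; anything non-numeric signals a correctable TYPE-ERROR instead of silently flowing downstream.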
2006: At Zillow, we were building autonomous server-farm control at a time when AWS was still emerging. Unfortunately, two things happened at the same time. 1) There was a re-org in Ops that led to discontinuing contractors, so the two of us had just 2 weeks to finish, and to finish early. 2) SBCL had just reached 1.0 after years in 0.x land. One change in the run-up to SBCL 1.0 that I overlooked in the changelog was how bindings for special variables were handled across threads. It required a trivial fix, but we didn't track that down before time ran out-- oops!
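The threading behavior in question can be shown with a minimal sketch (assuming a current SBCL with sb-thread): a dynamic binding established in the parent thread is not inherited by a newly spawned thread, which sees the global value instead.

```lisp
(require :sb-thread)

(defvar *config* :global)

;; Post-1.0 SBCL: the LET rebinding below is thread-local, so a child
;; thread started inside it still sees the global value.
(defun binding-seen-by-child-thread ()
  (let ((*config* :rebound))
    (sb-thread:join-thread
     (sb-thread:make-thread (lambda () *config*)))))
;; (binding-seen-by-child-thread) => :GLOBAL, not :REBOUND
```

Code that assumed the child would see :REBOUND needed exactly the kind of trivial fix mentioned above, e.g. passing the value in explicitly or rebinding inside the new thread.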
Rationale for using CL: 2 guys still learning the language (one also working on a Master's) got lots of traction quickly.
2007: At an education-administration start-up, the same colleague from Zillow and I began working on a conceptual sibling to Amazon's Dynamo, roughly when their original white paper came out. In 5 weeks we had an MVP: Kevin wrote all the code; I helped with design and code reviews. It handled arbitrary data payloads and migration of data "ownership" between nodes. We were about to begin multi-node load & capacity measurements when the business joined a Microsoft incubator and removed all Unix-only folks.
Rationale for using CL: 2 guys, lots of traction quickly... plus, late binding of schema via MOP tricks.
I would like to believe that parts of this live on in Kevin's vivace-graph, despite its being a very different animal.
2008: At a Seattle-based ad network (I still bear the shame...), the geek-macho angle was that this is one of the few types of business likely to see billions of requests per day. In 4 months, we were deployed at Rackspace and had completed baseline load & capacity measurements.
There was an issue with garbage collection after Hunchentoot released an HTTP connection: FreeBSD wouldn't release the underlying TCP/IP structs right away, so several minutes would elapse before the GC could reclaim everything. Not ideal, but it was manageable!
Rationale for using CL: 1 guy, lots of traction, and a successful hand-off to a new hire (the guy who later brought Clojure to Amazon's retail side of the house).
Bonus: local angels and a VC (Madrona) came to us, but then came the economic crash of 2008...
2009: I was recruited into Memetrics, a former startup that had been acquired by a big consulting firm; its core software had already been written in CL nearly a decade earlier. I was only in the core group briefly, but CL was my default choice for subsequent work and already approved. (Then again, I didn't ask permission or beg forgiveness.) There were more dirty-data cleaning tools, plus a recommender system through which China's answer to Amazon became a client. Mind you, they didn't use this code, as they had their own, but it demonstrated that our group understood the principles. I still count this as a win.
Rationale for selecting CL: the decision was made by folks in Sydney before I arrived. Their head-hunter found me after the acquisition.
2013: Splunk acquired BugSense. BugSense used "Erlang, Lisp and C" (the Lisp being an R4RS Scheme) to handle billions of inbound requests per day, non-stop, from all time zones into our cluster. While not CL, having previously gone sufficiently deep into CL let me get from zero to presenting at an Erlang Factory conference in exactly one year, while achieving a more-than-25x performance increase with the pure-Erlang rewrite.
Rationale for their founder selecting Lisp: it made for a malleable query language.
2015: My wife wanted to move back to Canada after several years in Silicon Valley. I resurrected a pet project from circa 2007, snagz.net, and attempted to found a company. The back-end is CL for the Natural Language Processing workflow. The core NLP now uses spaCy.io, because their NLP-fu is far better than my self-taught version. Some intermediate working data sets were pushed into a slightly modified Anarki fork of HN, partly to see what the Arc language was about and partly because that tool fit the (internal) need.
Rationale for selecting CL: Lisp is a beautiful language within which to work.
Others in and around Vancouver using CL: Routific mentions Rust and Common Lisp in their job postings; D-Wave previously mentioned Lisp in job postings, though I can't find it there today; there was at least one other, but I can't recall who.
In my early experience with Common Lisp, I would have said MACROS were key.
Then, pre-populating memory followed by #'sb-ext:save-lisp-and-die was a handy trick for improving cold-start times (an alternative to using POSIX mmap()).
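That trick, as a minimal build-script sketch (the data shape, PRELOAD helper, and core name "app" are hypothetical; the sb-ext:save-lisp-and-die keywords are standard SBCL):

```lisp
;; Load this into SBCL, populate memory, then call (DUMP-IMAGE).
(defvar *lookup-table* (make-hash-table :test #'equal))

(defun preload (lines)
  "Fill *LOOKUP-TABLE* from a list of strings; return the table."
  (dolist (line lines *lookup-table*)
    (setf (gethash line *lookup-table*) t)))

(defun dump-image ()
  "Write an executable core named \"app\" with *LOOKUP-TABLE* already
built, so cold starts skip the load/parse step entirely."
  (sb-ext:save-lisp-and-die
   "app"
   :executable t
   :toplevel (lambda ()
               (format t "~D entries preloaded~%"
                       (hash-table-count *lookup-table*)))))
```

The resulting ./app binary starts with the table already in the heap; no file I/O or parsing happens at launch.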
Now, it’s simply the joy of writing code in the language.
First milestone: the parens disappear, because the right editor configuration means never typing a single paren, and editing involves manipulating whole expressions with fluid motion.
Second: thinking about how a function should or might be named points to something either in the HyperSpec or in the name of a package, making internet search easy. Today, there's Quicklisp.org and Quickdocs.org, making this even easier.
That said, I’m currently learning Rust and React-Native for a tiny mobile app, as each language can teach you something new.
But something in the Lisp family will be with me for the rest of my days. I might go with Racket next just to expand my knowledge there.
So I recommend: use Common Lisp for a real project-- just because you can!