Everybody hates rejection, and everybody fears rejection (unless you’re a salesman, of course).
The idea of someone telling us that we’re not good enough, we’re not a fit, we’re not loved anymore, goes against the human need to be liked.
Have you ever been paralyzed before asking out that girl you like? Even knowing you have nothing to lose by doing it?
Interviewing for a job is no different: anyone applying for a position you have open is dealing with the fear of rejection,
and your responsibility is to acknowledge that fact and treat the people who want to work with you as human beings.
Does that mean you need to personally thank every applicant who emails a resume?
Not exactly.
Respect
You need to show respect. If you remember only one thing, remember that.
Being in the position of the interviewer doesn’t make you better, and if the people you’re hiring are any good,
you’d better show your brightest side during the process; no one likes to work with assholes.
Having failed myself many times -always with a reasonable excuse, of course- I made some easy rules to follow when interviewing people.
It’s not about the stage
The interviewing process in your company may have many stages, technical interviews and whatnot.
But in my book about respect, there are only two things that matter when interviewing someone:
How much time the applicant has spent
Whether you have ever met her face to face
That’s all; your respect rules must derive from those two things.
Dealing with time
What should you do when rejecting someone, according to the time he has spent?
He sent an off-the-shelf resume, no personal letter
No need to respond
She applied with a personal letter, took the time to understand what your company does, and why she’s a fit
A thank-you note: you’re not exactly what we’re looking for
He spent 3 hours in a technical interview and failed
A note thanking him for the time spent; if it explains why the applicant failed, so much the better. People always want to know the reasons for rejection.
Applicant passed the technical interview, but fails later on the process
Same as above
She failed on the later stages, and you have seen her face in an interview
You should make a phone call and personally thank her for the time spent
Easy ain’t it?
It’s about other people’s time, and it’s about human relationships.
If you don’t have the balls to reject someone like a human being, you shouldn’t be interviewing at all.
I’ve been looking at Phonegap since it started, long before Adobe bought it in a desperate attempt to destroy it.
Up until now, I never got the chance to make something with it, but always had the doubt how good or bad the applications you could create were.
Last week I had an opportunity to take a look under the hood, and made the source code available on GitHub.
The big question was, can you make native apps using Phonegap?
What do you mean by native?
According to this guy, in order to make native iOS applications you need to program in Objective-C.
I say that’s misleading.
Native applications are applications that run on the phone and provide a native experience, by which I understand at least the following traits:
Can access all of the device APIs: address book, camera, etc.
Access to local storage
Zero latency feedback
Interoperability with other phone applications
UX should respect the device culture and guidelines
If you have those, why should you care about the language the app is written in?
The wrong reason to use Phonegap
At first sight you may think that the main reason to use Phonegap is program once, run everywhere.
Wrong.
In order to provide a native experience you will need to design the UX of your app for every platform you’re targeting, so at least the UX/UI code will be different.
Obviously you can use the same UI on all platforms, but unless the purpose of your app is to alienate your users, I wouldn’t try it.
Software should behave as the user expects it to behave. You would not create new affordances for the sake of creativity; don’t do it for the sake of saving money either,
because it isn’t cheaper in the long run.
So, no matter what you’re thinking about doing, save some time to read the UX/UI guidelines for each mobile platform you’re targeting.
The great Mike Lee would tell you that you even need a different team for each of those platforms.
WTF is Phonegap?
You know the tagline, “easily create apps using web technology you love”. Does it mean the only thing you need to know is HTML and JavaScript?
Of course not.
Phonegap is an extensible framework for creating web applications, with the following properties:
The framework exposes an API to access the device different components
Accelerometer
Camera
Compass
Contacts
Etc.
The API is the same for the different supported platforms
iOS 4.2+
Android 2.1+
BlackBerry OS 6+
You must code your program using HTML and Javascript.
You can think of it as a native host that lets you write your application in Javascript, abstracting the native layer components behind a uniform API. That’s all.
So you’ll end up creating your app inside XCode, dealing with code signing nightmares and taking a lot of walks inside the Apple walled gardens.
And you will need to learn the Phonegap API.
It doesn’t have to be 100% HTML
The first reaction is to think that since Phonegap uses a webview you will have to create your application using only HTML, but it’s not the case.
Phonegap supports plugins, a mechanism for exposing native features not already covered by the Phonegap API.
So you can create more native plugins and expose them to JavaScript, where JavaScript works as the glue that binds
the native controls together but is not necessarily used to create all the UI.
The most common example is the TabBar and NavigationBar on iOS; plugins for both already exist, and they let you design a more native experience than the one
you would get using only HTML.
Notice the Back button in there: it’s just a custom button with the “Back” text. If you want to create an arrow-like back button, you’ll need to go
down the same path as if you were doing Objective-C iOS development.
Among the most well known HTML UI frameworks are jQuery Mobile and Sencha Touch. JQM development is more web-like, something to consider if your team is already
comfortable with HTML; Sencha generates its own DOM from JavaScript objects, 100% programmatically.
I haven’t dug deep enough to write a complete evaluation; you may find some interesting ones here, here and here.
Almost everybody agrees in one important point:
JQM is sluggish and its transitions don’t feel native enough, something I easily verified testing the app on my iPad 1, where even the slider was sluggish.
Using Phonegap Plugins
Plugins are usually composed of two parts:
The native bridge between Objective-C and Javascript
The Javascript exposing the plugin
Usually you’ll need to copy the m and h plugin files to the Plugins directory of your project, you will also need to declare the plugins
being used in the config.xml project file.
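The exact declaration format depends on the Phonegap version you’re running; in the 2.x era it looked roughly like the sketch below. The plugin names and class values here are illustrative, not from the post — they must match the Objective-C classes you copied into Plugins, so check your version’s documentation.

```xml
<!-- Hypothetical excerpt from config.xml: map each plugin name used
     from JavaScript to its native Objective-C class -->
<plugins>
  <plugin name="NavigationBar" value="NativeControls" />
  <plugin name="TabBar" value="NativeControls" />
</plugins>
```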
plugins.navigationBar.init();
plugins.tabBar.init();

plugins.navigationBar.create();
plugins.tabBar.create();

plugins.navigationBar.setTitle("Navigation Bar");
plugins.navigationBar.showLeftButton();
plugins.navigationBar.showRightButton();
plugins.navigationBar.setupLeftButton("Back", "", function () {
    $(window).unbind("scrollstop");
    history.back();
    return false;
});
plugins.navigationBar.setupRightButton("Alert", "barButton:Bookmarks", function () {
    alert("right nav button tapped");
});
plugins.navigationBar.show();

plugins.tabBar.createItem("contacts", "", "tabButton:Contacts", {onSelect: app.loadNews});
plugins.tabBar.createItem("recents", "", "tabButton:Recents");
plugins.tabBar.createItem("another", "Branches", "www/images/map-marker.png", {
    onSelect: function () {
        app.loadMap();
    }
});
plugins.tabBar.show();
plugins.tabBar.showItems("contacts", "recents", "another");
Using this strategy lets you extend your app for a more native experience, exposing to JavaScript even custom controls you may design.
This way you can have some members of your team focused on the native code of the app and exposing the building blocks to the web developers assembling the pieces.
As you see in the last image, the TabBar is shown in the native version and the HTML version side-by-side. The HTML version was created using jQuery Mobile.
Debugging is hell
Well maybe it’s not hell, but it’s not a pleasant experience either.
If you include the following line in your html using your own id:
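The snippet itself was lost from this copy of the post; following weinre’s documented target-script pattern, it would be something like the line below (replace your_id with your own id — both the host and the id here are assumptions, not taken from the original):

```html
<script src="http://debug.phonegap.com/target/target-script-min.js#your_id"></script>
```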
You’ll have easy access to debug your app using weinre without the need to set it up, at least it’s good for HTML inspection.
If you want to debug javascript, you’ll certainly end up using alert and console.log, even the guys at Phonegap are recommending the poor’s man debugger.
Be ready to waste some of the time you gained by choosing Javascript doing print based debugging.
Tools are picked for the team, and that’s what you should think about when deciding whether or not to pursue the Phonegap path. If you already have members on your team
who are great at web development, Phonegap may be an option; it’s certainly fast for prototyping and seems to be a great asset for product discovery and validation.
If charging for your app is among your goals, I wouldn’t pick Phonegap or any other framework that uses the webview renderer as the main application.
Also, for most tasks the JavaScript VM will be alright, but if you have CPU-intensive inner loops, as in game development, Phonegap is not really an option.
Reviewing the main points used above to categorize a mobile application as native: web frameworks will give you a sub-par experience regarding feedback,
latency and usability, and using Phonegap plugins to avoid that will only go so far before the cost gets so high that you’d be better off programming in Java or Objective-C anyway.
If you still have doubts, fork the code and give it a try yourself.
There are some really nice alternatives out there if you want your application to be able to make a call or send an SMS.
But the truth is sometimes you don’t want to rely on the cloud for your latency-sensitive communications, you already have communications infrastructure
you want to reuse, or you have such a volume of calls to make that it’s cheaper for you to roll your own solution.
So I will show you a DIY guide to rolling your own dialer using Clojure and Asterisk, the self-proclaimed PBX & Telephony Toolkit.
What is a Dialer
If you ever received a spam call from someone trying to sell you something, it was probably made by an automated dialer.
The purpose is to reach as many people as possible in the least time, optimizing resources.
Sometimes it’s someone selling Viagra, but hopefully it’s used for higher purposes such as massive notification of upcoming emergencies.
Integrating with Asterisk
Asterisk has a lot of integration alternatives: custom dial-plans, AGI scripting, outgoing call spooling, or you can write your own low-level C module;
each strategy serves its purpose.
For this scenario I’ve decided to show you an integration with Asterisk using the Asterisk Manager API,
which allows for remote command execution and event-handling.
I’ve written a binding for Clojure called clj-asterisk to sit on top of the low-level text based protocol.
Making a Call
The clj-asterisk binding maps straightforwardly onto the Asterisk API, so check the Originate action, which is the
one we need to create an outgoing call.
The ActionID attribute is not specified since it’s internally handled by the clj-asterisk library in order to track async responses from Asterisk.
Receiving events
For most telephony-related actions blocking is not desirable: most of the time the PBX is handling a conversation and waiting for
something to happen, so a blocking scheme is far from the best. You need a strategy to wait for events that tell you when something
you may be interested in happens.
In this case we will be interested in the Hangup event, in order to know when a call has ended and the dialing port is free again,
so we can issue a new call. If you’re interested in the complete list of events, it’s available on the Asterisk Wiki.
To receive an event using clj-asterisk you only need to declare the method with the event name you need to handle:
The method passes as parameter the received event and the connection context where the event happened.
The Main Loop
In order to have a proper dialer you will need a main loop, whose life-fulfillment purpose is to:
Decide which contacts are to be called
Know how many ports are free, and thus how many calls can be dialed now
Handle retrying and error rules
Dispatch the calls
I’m assuming you have some data storage to retrieve the contacts to be dialed and will share those details in a later post,
I will focus now only in the dialing strategy.
(defn process
  "Loops until all contacts for a notification are reached or finally cancelled"
  [notification context]
  (let [total-ports (get-available-ports notification)
        contact-list (model/expand-rcpt notification)]
    (loop [remaining contact-list
           pending-contacts []]
      (when (or (seq remaining) (seq pending-contacts))
        (let [pending (filter (comp not realized?) pending-contacts)
              finished (filter realized? pending-contacts)
              failed (filter (fn [r]
                               (not (contains? #{"CONNECTED" "CANCELLED"}
                                               (:status @r))))
                             finished)
              free-ports (- total-ports (count pending))
              contacts (take free-ports remaining)
              dialing (dispatch-calls context notification contacts)]
          (println (format "Pending %s Finished %s Failed %s Free Ports %s Dispatched %s"
                           (count pending) (count finished) (count failed)
                           free-ports (count dialing)))
          (Thread/sleep 100)
          (recur (concat (drop free-ports remaining) (map :contact failed))
                 (concat pending dialing)))))))
Let’s go piece by piece…
You wanna know how many ports are available to dial, for instance you may have only 10 outgoing lines to be used.
total-ports (get-available-ports notification)
You wanna know the recipients to be reached.
contact-list (model/expand-rcpt notification)
Then you wanna know the status of the contacts you’re already dialing and waiting for an answer or for the call to finish.
(let [pending (filter (comp not realized?) pending-contacts)
      finished (filter realized? pending-contacts)
      failed (filter (fn [r]
                       (not (contains? #{"CONNECTED" "CANCELLED"}
                                       (:status @r))))
                     finished)
      free-ports (- total-ports (count pending))
Here pending-contacts is a list of futures, the contacts currently being dialed. Since we don’t wanna block waiting for the answer, the realized?
function is used to count how many of them are finished and filter them. If the finish status is not CONNECTED or CANCELLED,
we assume the contact failed and we need to issue a retry, typically for the BUSY and NO ANSWER statuses.
Then, given the total available ports minus the contacts already being dialed, a new batch of contacts is dialed.
The dispatch-calls function is pretty straightforward; it just async-calls each contact in the list.
(defn dispatch-calls
  "Returns the list of futures of each call thread (one p/contact)"
  [context notification contacts]
  (map #(future (call context notification %)) contacts))
Finally the call function issues the request against the Asterisk PBX and saves the result for further tracking and analytics.
(defn call
  "Call a contact and wait till the call ends. Function returns the
   hangup event or nil if timedout"
  [context notification contact]
  (manager/with-connection context
    (let [trunk (model/get-trunk notification)
          call-id (.toString (java.util.UUID/randomUUID))
          prom (manager/set-user-data! call-id (promise))
          response (manager/action :Originate
                                   {:Channel (format "%s/%s/%s"
                                                     (:technology trunk)
                                                     (:number trunk)
                                                     (:address contact))
                                    :Context (:context trunk)
                                    :Exten (:extension trunk)
                                    :Priority (:priority trunk)
                                    :Timeout 60000
                                    :CallerID (:callerid trunk)
                                    :Variables [(format "MESSAGE=%s" (:message notification))
                                                (format "CALLID=%s" call-id)]})]
      (model/save-result notification contact
                         (deref prom 200000 {:error ::timeout})))))
The tricky part here is that it’s impossible to know beforehand the call id Asterisk is going to use for our newly created call,
so we need a way to mark our call and relate to it later when an event is received. We do that using the call variable CALLID,
which is a GUID created for each new call.
Our call creating function will wait on a promise until the call ends, something we will deliver in the Hangup event as shown here:
;; Signal the end of the call to the waiting promise in order to
;; release the channel
(defmethod events/handle-event "Hangup"
  [event context]
  (println event)
  (manager/with-connection context
    (let [unique-id (:Uniqueid event)
          call-id (manager/get-user-data unique-id)
          prom (manager/get-user-data call-id)]
      (println (format "Hanging up call %s with unique id %s" call-id unique-id))
      (deliver prom event)
      (manager/remove-user-data! call-id) ;; FIX: this should be done
                                          ;; on the waiting side or promise may get lost
      (manager/remove-user-data! unique-id))))

;; When CALLID is set, relate it to the call unique-id
;; to be used later in hangup detection
;;
;; The context has the following info inside:
;;   callid => promise
;;   Uniqueid => callid
;;
;; so it's possible to deliver a response to someone waiting
;; on the callid promise
(defmethod events/handle-event "VarSet"
  [event context]
  (when (= (:Variable event) "CALLID")
    (manager/with-connection context
      (println (format "Setting data %s match %s" (:Uniqueid event) (:Value event)))
      (manager/set-user-data! (:Uniqueid event) (:Value event)))))
It seems more convoluted than it actually is: when the CALLID variable is set, we receive an event that lets us map the call-id to the
Asterisk UniqueId. Then, when the Hangup occurs, we can find the promise to be delivered and let the call function happily end.
Stay tuned for part II, where I will publish the data model and the complete running dialer.
Here is the gist with the code of the current post.
The coxcomb chart was first used by Florence Nightingale to persuade Queen Victoria about improving
conditions in the military hospitals during the Crimean War.
As you can see it serves the same purpose as a traditional bar chart, but displays the information in
a coxcomb flower pattern.
I couldn’t find something already done that suited my needs, so I made one my self.
It’s slightly modified from the original design, since it doesn’t display the bars stacked but side by side, I think it’s better
to display superposed labels that way.
I’ve used it to show some skills in a resume-of-sorts if you wanna see a color strategy by category and not by series.
Lie Factor Warning
The received values are normalized and the maximum value takes the complete radius of the coxcomb. Be warned:
each value is normalized so that only the radius is affected, not the complete area of the disc sector.
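To see why radius normalization lies, recall how the area of a disc sector grows with the radius:

```latex
A = \frac{\theta}{2}\, r^2
\qquad\Rightarrow\qquad
r \propto v \;\Rightarrow\; A \propto v^2
```

So a value twice as large gets four times the ink; for the area to be proportional to the value you would need $r \propto \sqrt{v}$ instead.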
This may introduce visualization problems like the ones pointed out by Edward Tufte, with lie factors of 10x or more, as in the following well-known case with a 9.4
lie factor.
I may fix it if someone finds this useful; the formulas for the areas are on this website. The source code is on GitHub.
I just watched the Strangeloop talk titled Types vs. Tests: An Epic Battle from Amanda Laucher and Paul Snively.
As Amanda says, it’s a discussion many of us have had in the past; I used to talk about it with fedesilva,
hardcore Scala advocate, you know what I mean?
For me, types and tests are two sides of the same coin, because neither strategy
for proving correctness is computable.
Bear with me.
The purpose of having types or having tests is to prove your program correct before your bugs reach your customers;
one strategy tries to prove it at compile time and the other after compile time. But what does it really mean for your program to be correct?
According to the definition a function f is computable if a program P exists which can calculate the function given unlimited amounts of time and storage.
f:N→N is computable ↔ ∃ a program P which computes the function.
Also we must define when a program P converges on input n.
Program P with input n converges if ∃ m ∈ N / <P, n> = m; it's written <P, n>↓
In computability theory there are functions well known to be non-computable; two of them are:
Θ(n) = 1 if <Ix(n), n>↓
0 if <Ix(n), n>↑
Which says that the function Θ is equal to 1 if the program of index n converges on input n and 0 if the program of index n diverges on input n.
There’s another very famous function which has been proved to be non computable.
stop(p, n) = 1 if <Ix(p), n>↓
0 if <Ix(p), n>↑
Which pretty much says that given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever.
As you may have guessed, it’s the well known Halting Problem, and it’s not computable.
How the Halting Problem relates to types and tests
Let’s assume our program P we’re trying to prove correct, computes the function f:N→N. We can define our program that proves correctness, a program T
that computes the function
ft:N→{0,1}
Which is to say, for every input of the domain, our program T decides if the program converges or not. It’s starting to sound familiar ain’t it?
Let’s assume such a program T exists to prove correctness, and we have a macro MT to find such a program given P. We could write the following program Q
What Q does, having received a program x0 and an input x1, is first find the decider program T for x0, and then evaluate that decider
on the input x1.
So what do we have here?
Q(p, n) = 1 ↔ ft(n) = 1 ↔ <P, n> ↓
So we have written a program which computes the stop function, which is absurd. It means we cannot have a program that decides on the convergence of
another program.
Show me the code
In practice, it means that if you have this program
int f(int x) {
    while (x != 0) {
        x--;
    }
    return 0;
}
This program doesn’t stop for x < 0, and according to theory, there’s no program you can write to find out about it.
There are also a few other funny cases regarding the domain of your functions, such as
int f(int x) {
    return 1 / (x - 3);
}
This function fails miserably if x = 3. Just think about it when your functions have a more complex domain.
How to improve your tests for correctness
Most people I see are worried about having 100% code coverage, but it’s not that usual to see people worried about data coverage.
As seen in the previous example if you forget to test for x = 3 you may have 100% code coverage but your program will blow up anyway.
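A data-coverage test has to probe that edge explicitly. Here is a minimal sketch of the idea (f_checked and the ok flag are my own names, not from the post): a suite with 100% code coverage that never tries x = 3 passes happily, while a boundary-value test catches the domain edge.

```c
#include <assert.h>

/* Hypothetical guarded variant of the post's f(x) = 1/(x - 3).
 * The domain edge x == 3 is handled explicitly instead of crashing
 * with a division by zero; *ok reports whether x was in the domain. */
int f_checked(int x, int *ok) {
    if (x == 3) {            /* the singular point full code coverage can miss */
        *ok = 0;
        return 0;
    }
    *ok = 1;
    return 1 / (x - 3);
}
```

A boundary-value case for x = 3 belongs in the test suite right next to the happy-path cases; that is the data coverage the paragraph above is asking for.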
Regarding types, I know Dependent Types exist, but they’re the other side of the same coin: you have to provide a constructive
proof that the type is inhabited. So if you don’t define your type considering the special cases of your function’s domain, no one is coming
to save your ass.
But when thinking about correctness you should be thinking about your function domain.
Conclusion
Both tests and types are useful ways to validate that your program is correct, but neither is perfect. The discussion is even a bit meaningless, because it’s just a matter
of taste whether you like to specify your correctness rules in types or in tests; either way it’s something you will keep doing, as far as I can tell.
As Rich Hickey said, both tests and types are like guard rails, and you must know the cliff is there in order to decide building them.
Update:
Many people wrote to me as if I were saying you can’t prove a program to be correct; that is not what I tried to say.
It’s that you can’t have a system that proves programs correct without specifying the correctness rules yourself.
That is, it was a case against Q, not against T.
It’s almost amazing that being the year 2012, on the break of Mayan Apocalypse, and there’s still some people pushing code out the door without stopping for a minute to think how much a bug costs.
I’ll save you the thinking, it’s costing you customers.
See the following chart I’ve crafted for you (emphasis on crafted); please hit the play button.
There’s an obvious relationship between the cost of fixing a bug and how many customers your company can effectively take.
It has an easy explanation: if you have only one customer and your solution has a bug, what do you do?
You call her, you explain the bug, you go to her office, you hack a fix, you drink some coffee, and you move on.
Maybe if you have one big fat customer based on a personal relationship, you can live with that.
I hope it’s clear to you that this delivery process does not scale, though.
When you have hundreds or thousands of customers you can’t clone yourself and explain to everyone why your product is failing,
you won’t drink a hundred coffees to build rapport and talk your way out of the mess.
I think there’s still two big misconceptions about this relationship between your bugs and your customers,
and it may affect how you decide on your development and delivery process.
Bug fixing cost and quality are not the same thing
It’s widely known, I hope, that the earlier you find a bug, the cheaper it is to fix it. This guy even fixes his bugs in the hammock, before writing any code.
Take a look at the chart of the relative cost of fixing defects, this is the source
Obviously you should be investing in good engineering, peer reviewing even your documents and designs, and testing your components early and often.
(Quality is not about testing either, but that’s material for another rant.) What’s not so clear is, given that some bugs will always reach your customers,
how do you reduce the cost of fixing your bugs in the wild?
You should do everything in your reach to produce quality products, because it’s cheaper in the long run.
But what will make or break your ability to grow your customer base, is how fast and cheap you move when a bug is found.
Your maintenance cost, if you want.
Bug fixing cost is like performance in the browser
You should watch this talk from the last Strangeloop; besides being great, Lars Bak makes a great point about performance in the browser:
when a new level of performance was reached in the JavaScript VM, all new kinds of applications started to pop up taking advantage of that performance.
Speed in the browser did not improve because Gmail was running too slow; first speed improved, then we got Gmail.
It’s the same with your customers.
If you wait until you have lots of customers to start thinking about improving your maintenance costs, you will never have them.
Having low support and maintenance costs will let you find a way to acquire more customers, just because you can.
What to do?
This is not by any means a complete or bulletproof list, just some strategies I’ve found from personal experience that help.
Have you ever been involved in a delivery process having to test thousands of test cases, run dozens of performance and stress tests,
do it in multiple platforms, all of it, just because you patched 3 lines of code, and you must be absolutely sure everything is still working as intended?
I have, and it’s not fun
It’s not fun for your customer either, because you end up batching even your hot-fixes, and they’re not so hot anymore.
And your customer has to wait, and you will eventually lose your customer.
Continuous integration is not about some geeks with shiny tools, it’s about customers.
You develop with operations in mind
There’s a great talk by Theo Schlossnagle about what it means to have a career in web operations, walking the path, and becoming a craftsman, you must watch it, seriously, because it’s that good.
One of his remarkable points is that you must build systems that are observable. Developers cannot separate themselves from the fact that software has to operate,
actually run. And developers shouldn’t be trying to reproduce a bug in a controlled environment in order to understand whether there’s really a bug.
You should be able to diagnose the problem in the running system, so it must be observable. How many elements are in that queue? Is it stalled? You must know, now.
And you don’t build observable systems if you start thinking about them after you’ve shipped, using an entirely different team (hello DevOps).
Software with operations in mind is like software with security in mind, or quality in mind, it’s a state of being, and it’s about your development process.
You use the right tools
How long does it take you to see that a function is returning the wrong value?
How long does it take you to find the 3 lines of log that point you to the exact spot the problem is?
How long does it take you to analyze a crash dump and get to the cause of the crash?
Being able to debug and diagnose a problem fast is almost as important as being able to fix it fast, and to deploy the fix fast.
This is an area where I personally think there’s a lot of room for improvement in the tools we use daily,
but you should know that DTrace exists and how to use it, ideally.
Conclusion
If you’re hacking your brains out and life’s good, all the power to you. I like that too.
But if you’re really thinking about scaling your business, you should be taking a look at your bug fixing and maintenance costs, now.
There’s also a great book about scaling companies, you should read that one too.
When you’ve been leading a team for a while, you kinda get used to get them together and break some news, whether it’s good or bad. People also get used to listening, and if you’ve done well, trusting you.
But there’s some news you will never be ready to break, it’s the day you must say you’re stepping down.
I know I wasn’t.
It was one of the most difficult and saddest moments I’ve ever had to go through; I still find it hard to even write about it.
Beyond all reasons, there’s only one thing I really want to say to Ewe, Niko, Burguer, Seba, Fernickk, Nacho, Juan, Cany, Paola, El Ruso Alexis, Fede, Canario, Lolo & Diego.
I’m not a StackOverflow active contributor, something I recently decided should start to change.
I think it’s amazing the speed an answer is given for any asked question, like freaking fast. If you are using Google Reader to peek new questions filtered by tag, when you see a question, almost for sure it’s already answered.
Fortunately all StackExchange data is open, so we can see exactly how fast is that. I used the online data browser, more than enough for the task.
I decided to consider only the questions having an accepted answer, since questions with many bogus answers should not be treated as having an answer at all.
tl;dr
The average answer time seems to depend on a mix of the maturity of the language and how many people are using it.
Hey, Haskell has pretty good answer times, at least considering its 33rd position in the TIOBE Index.
Not all questions are the same
Of course not all questions are the same; this is from the first query I ran.
This is an unfiltered query using all the questions from 2012. You see the average answer time is much higher than in the previous chart, around 1000 minutes. Looking at the data:
Language     Ans. Time (min)   Std. Dev.
c            934               7631
c++          1036              7258
clojure      1078              7486
haskell      1199              9060
php          1210              8589
lua          1386              6569
c#           1452              8875
scala        1472              10708
javascript   1490              9757
java         1755              10542
ruby         2124              11850
The standard deviation is huge: we have a lot of questions that took ages to get answered, making the average answer time meaningless.
So I decided to drop questions with an answer time greater than 24 hours, as 92% of the questions have an approved answer in less than 5 hours.
(Here you can see the query used to get this table.)
Difficulty Group   Total   Average (min)   Std. Dev.
Easy               47099   27              45
Medium             344     691             339
Hard               1926    3769            2005
Hell               1623    66865           96823
You see there PHP running at the front with a 68-minute average accepted-answer time; either it’s too easy or there are too many of them.
If you wanna see how the distribution goes when considering accepted answers in less than 5 hours, it’s in the first picture of this page; the trend is also there.
What about the time?
Something unexpected: the average answer time is almost unaffected by the time of day the question was asked.
The only thing I see here is that Ruby programmers are being killed by the lunch break, and C++ programmers slowly fade out as the day goes by.
There goes my idea of catching unanswered questions at night. It would be interesting to see how much cross-timezone answering is happening.
Conclusion
It would probably work better to run a regression against the complete dataset, using more features than only programming language and time of day,
to automatically guess which questions have a higher chance of staying unanswered for a long time. Maybe next time.
Last week the Computational Investing course from Coursera started, and I've been taking a look.
What caught my attention is the library used for portfolio construction and management: QSTK, an open-source Python framework based on numpy, scipy, matplotlib, pandas, etc.
Looking at the first tutorial's source code, I saw it as an opportunity to migrate the tutorials and libraries to Clojure and play a little with Incanter.
I’m going to highlight what I’ve found interesting when migrating the tutorials. I’m assuming you have QSTK installed and the QS environment variable is set, since the code depends on that for data reading.
As part of the initialization process the tutorial calls a function, getNYSEDays, which retrieves all the days there was trading at the NYSE. Migration is straightforward using Incanter's read-dataset to read the file into memory and then filtering the required range.
Pay attention to the time of day, set at 16 hours, the time the NYSE closes; we'll see it again in unexpected places.
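A rough Python sketch of what getNYSEDays boils down to; the in-memory list of trading days stands in for QSTK's real data files, so the names here are assumptions:

```python
import datetime as dt

# Hypothetical list of NYSE trading days (the real function reads these from a file)
trading_days = [
    dt.datetime(2012, 1, 3),
    dt.datetime(2012, 1, 4),
    dt.datetime(2012, 6, 1),
]

def get_nyse_days(days, start, end):
    """Keep trading days inside [start, end], timestamped at the 16:00 close."""
    close = dt.timedelta(hours=16)
    return [d + close for d in days if start <= d <= end]
```

The 16:00 shift is the detail to notice: every returned timestamp is pinned to the market close, which is exactly the magic number that resurfaces later in the data-access code.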
Data Access
QSTK provides a helper class called DataAccess used for reading and caching stock prices.
As you see here, there's some data reading happening; we're going to take a look at these functions, since we'll need to write them from scratch.
We're going to separate this into two functions: first, reading symbol data from disk, again using read-dataset, and creating a hash-map indexed by symbol name.
Creating a symbols hash-map of incanter datasets
(defn read-symbols-data
  "Returns a hashmap of symbols/incanter datasets read from QS data directory"
  [source-in symbols]
  (let [data-dir (str *QS* "/QSData/" source-in "/")]
    (reduce #(assoc %1 %2 (incanter.io/read-dataset (str data-dir %2 ".csv") :header true))
            {}
            symbols)))
Then, if you take a look at voldata in a Python REPL, you can see pretty much what it's doing:
It's grabbing the specified column, volume or close, from each symbol's dataset, and creating a new table with the resulting column renamed as the symbol.
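That behavior can be sketched in a few lines of pandas, with made-up symbol frames standing in for the real datasets:

```python
import pandas as pd

# Hypothetical per-symbol datasets, as read_symbols-style code would load them
symbol_data = {
    "AAPL": pd.DataFrame({"date": [1, 2], "close": [10.0, 11.0]}),
    "GOOG": pd.DataFrame({"date": [1, 2], "close": [20.0, 22.0]}),
}

# Take one column ("close") from each symbol's frame and rename it to the symbol,
# then line them up side by side into a single table
frames = [df.set_index("date")["close"].rename(sym)
          for sym, df in symbol_data.items()]
close = pd.concat(frames, axis=1)  # columns are now the symbol names
```

The resulting table has one column per symbol, which is the shape the rest of the tutorial operates on.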
All the get_data magic happens inside get_data_hardread, it’s an ugly piece of code making a lot of assumptions about column names, and even about market closing time. I guess you can only use this library for markets closing at 16 hours local time.
In this case Clojure shines: the original function is almost 300 lines of code. I'm missing a couple of checks, but it's not bad for a rookie, I think.
The helper function select-value is there to avoid an exception when trying to find stock data for a non-existent date. The function also returns :Date as a double, since that's easier to handle later for charting.
Charting
Charting with Incanter is straightforward; there's a subtle difference from Python, since you need to add each series one by one. So this is what Python does here, charting multiple series at once:
newtimestamps = close.index
pricedat = close.values  # pull the 2D ndarray out of the pandas object
plt.plot(newtimestamps, pricedat)
We need a little function to solve it with Incanter. Each iteration gets reduced into the next with all the series accumulated in one chart.
creates multiple time-series at once
(defn multi-series-chart
  "Creates a xy-chart with multiple series extracted from column data as specified by series parameter"
  [{:keys [series title x-label y-label data]}]
  (let [chart (incanter.charts/time-series-plot :Date (first series)
                                                :x-label x-label
                                                :y-label y-label
                                                :title title
                                                :series-label (first series)
                                                :legend true
                                                :data data)]
    (reduce #(incanter.charts/add-lines %1 :Date %2 :series-label %2 :data data)
            chart
            (rest series))))
Data Mangling
Incanter has a lot of built-in functions and helpers to operate on your data. Unfortunately I couldn't use any of the many options for operating
on a matrix, or even $=, since the data we're processing has many nil values, for dates the stock didn't trade, and those raise an exception when
treated as numbers, which is what to-matrix does when it tries to create an array of Doubles.
There's one more downside: we need to keep the :Date column as-is when operating on the dataset, so we have to remove it, operate, and add it back again later, something that happens to be a beautiful one-liner in Python.
This attempts a naive normalization dividing each row by the first one.
normdat = pricedat / pricedat[0, :]
Or the daily return function.
dailyrets = (pricedat[1:, :] / pricedat[0:-1, :]) - 1
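To see what those two one-liners compute, here's a tiny worked example on a made-up price matrix (rows are days, columns are symbols):

```python
import numpy as np

# Two symbols over three days; the values are invented for illustration
pricedat = np.array([[10.0, 20.0],
                     [11.0, 22.0],
                     [12.1, 11.0]])

# Normalization: divide each row by the first row (broadcasting along rows),
# so every series starts at 1.0
normdat = pricedat / pricedat[0, :]

# Daily returns: each day's prices divided by the previous day's, minus 1
dailyrets = (pricedat[1:, :] / pricedat[0:-1, :]) - 1
```

Both rely on NumPy broadcasting, which is exactly what the nil values in the Incanter dataset get in the way of.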
I ended up writing the iteration and function-applying code from scratch.
Maybe there's an easier way, but I couldn't think of one; if you know a better way, please drop me a line!
Now normalization and daily-returns are at least manageable.
Normalization and Daily Returns
(defn normalize
  "Divide each row in a dataset by the first row"
  [ds]
  (let [first-row (vec (incanter.core/$ 0 [:not :Date] ds))]
    (apply-rows ds
                (/ first-row)
                0
                (fn [n m] (and (not-any? nil? [n m]) (> m 0))))))

(defn daily-rets
  "Daily returns"
  [data]
  (apply-rows data
              ((fn [n m] (- (/ n m) 1))
               (vec (incanter.core/$ (- i 1) [:not :Date] data)))
              1
              (fn [n m] (and (not-any? nil? [n m]) (> m 0)))))
With the helper functions done, running the tutorial is almost declarative.
If you want to take a look at the whole thing together, here's the gist; I may create a repo later.
Please remember that NumPy is way faster than Clojure, since it links against the BLAS/LAPACK libraries.
The main point is that there's a semantic misunderstanding of what the distribution of wealth is, confusing a statistical frequency distribution of income with the transitive verb distribute.
As if the current distribution of wealth were the result of someone deciding to distribute it unfairly.
I pretty much agree with Paul Graham's take on wealth creation, and think our focus should be on individuals being able to create more value for society and for themselves.
The semantic confusion may be part of the reason we're having the wrong conversation: about distributing, instead of about affecting the distribution by creating.