@luvsuneja wrote:
Hope you are all doing well.
I had a good call with the hiring manager at the logistics company where I am interviewing for a Data Science Intern position. He asked me to work on the following problem and to discuss it with him in an interview on Thursday.
So, the problem is about last-mile delivery. Logistics companies such as DHL and UPS can spend up to 35% of their costs on last-mile delivery. A key cost driver is that the call center has to phone the recipient to confirm the exact delivery location, which adds to the cost.
Instead, we can feed the address into a geocoder and get the coordinates. (A geocoder is software that takes address or point-of-interest text as input and returns a latitude and longitude.) This is still expensive, as Google charges per API call. Obviously, we cannot build a new geocoder ourselves (Google et al. have already done a lot of research; why reinvent the wheel?).
But we can structure our software to reduce the number of API calls to Google and thereby cut geocoding costs, by integrating our historical database of address-geocode records with the geocoder.
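One way to picture that integration is a cache-first lookup: normalize the incoming address, check the historical table first, and only fall back to the paid API on a miss. Here is a minimal sketch in Python; the `historical_db` dict, the `normalize` rules, and the call to Google's Geocoding endpoint via `requests` are illustrative assumptions, not the company's actual setup.

```python
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"  # Google Geocoding API endpoint
API_KEY = "YOUR_KEY"  # placeholder

# Illustrative in-memory stand-in for the historical address-geocode table.
historical_db = {
    "12 baker street london": (51.5237, -0.1586),
}

def normalize(address: str) -> str:
    """Very rough normalization: lowercase, drop punctuation, collapse whitespace."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in address.lower())
    return " ".join(cleaned.split())

def geocode(address: str):
    """Return (lat, lng), hitting the historical table before the paid API."""
    key = normalize(address)
    if key in historical_db:          # cache hit: no API call, no cost
        return historical_db[key]
    resp = requests.get(GEOCODE_URL, params={"address": address, "key": API_KEY}, timeout=10)
    results = resp.json().get("results", [])
    if not results:
        return None                   # caller can retry with a modified address (problem 3 below)
    loc = results[0]["geometry"]["location"]
    coords = (loc["lat"], loc["lng"])
    historical_db[key] = coords       # store so future lookups of this address are free
    return coords
```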
The hiring manager suggested looking at solutions to:
1. Check whether two addresses are similar (e.g., they may belong to the same building but have different apartment numbers). We can use historical geocode records in this case.
2. Since we have millions of address-geocode records, how do we structure or cluster them so that the search space, and therefore the search time, is reduced? (See the blocking sketch after this list.)
3. If Google's geocoder fails to find a match, how can we modify the address so that we get a match on the second try?
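For the second question, one common trick is blocking: derive a coarse key from each address (a postcode, a street-plus-city pair, or a geohash of a known coordinate) and only compare an incoming address against records that share the key, so the candidate set shrinks from millions to a handful. A minimal sketch, assuming the records are plain (address, lat, lng) tuples and that the last token of the string is postcode-like; both assumptions are illustrative.

```python
from collections import defaultdict

def block_key(address: str) -> str:
    """Coarse blocking key: here, the last token (assumed postcode-like).
    In practice this could be a parsed postcode, a (street, city) pair,
    or a geohash of a previously known coordinate."""
    tokens = address.lower().split()
    return tokens[-1] if tokens else ""

def build_index(records):
    """Group historical (address, lat, lng) records by their block key."""
    index = defaultdict(list)
    for address, lat, lng in records:
        index[block_key(address)].append((address, lat, lng))
    return index

def candidates(index, new_address):
    """Only records sharing the block key need a full similarity comparison."""
    return index.get(block_key(new_address), [])

# Illustrative usage with made-up records.
records = [
    ("10 Downing Street, London, SW1A2AA", 51.5034, -0.1276),
    ("221B Baker Street, London, NW16XE", 51.5238, -0.1586),
]
index = build_index(records)
print(candidates(index, "10 Downing St, London, SW1A2AA"))
```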
Currently, I am looking at string similarity algorithms such as Levenshtein, Needleman-Wunsch, TF-IDF, Jaccard, Jaro-Winkler, Affine Gap, etc.
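To get a feel for how such measures behave on near-duplicate addresses (same building, different apartment), here is a small sketch using only the standard library: a plain dynamic-programming Levenshtein distance and token-set Jaccard similarity. Library implementations (e.g., in jellyfish or textdistance) would normally replace these, and any decision threshold would need tuning on real data.

```python
import re

def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming (insert/delete/substitute all cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity on the sets of lowercase alphanumeric tokens."""
    sa = set(re.findall(r"[a-z0-9]+", a.lower()))
    sb = set(re.findall(r"[a-z0-9]+", b.lower()))
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

addr1 = "Flat 3, 221B Baker Street, London"
addr2 = "Flat 7, 221B Baker Street, London"
print(levenshtein(addr1, addr2))  # small edit distance: only the flat number differs
print(jaccard(addr1, addr2))      # most tokens overlap
# A rule of thumb might call these the same building when token overlap is high
# and edit distance is small (thresholds to be tuned on historical records).
```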
Would be great to hear your expert opinions.