Small demo project - paid (Position now filled)

Hi, I run a small software company and we have a demo to build for a client that involves optical character recognition — specifically image to text, in English. Would anyone be interested? If so, you can reply here or email hello@imaginegames.co.uk

Thanks,
Katie

Just one thought from me:
I’ve quite recently made a demo project in cocos2d-x where the user can draw numbers in a rectangle. I then convert the image to OpenCV format and run it through the SVM classifier from here: https://www.simplicity.be/article/recognizing-handwritten-digits/ to try to recognize the written number. From my experience it works pretty badly. A few numbers are correctly recognized about 90% of the time, but for some it’s more like 20% or even less.
So if you want to include characters as well, and — what’s even harder to recognize — whole words or sentences, I don’t think this would be a good method.
If you find some more effective OCR technology, let me know :slight_smile:

For the demo we just need the user to be able to write the answer to a times-table question and have the app recognise the result. I have seen video games from the early 1990s that managed that quite well. I have replied to a few emails, but we are looking for help building an app for a client that can do this, ideally within 28 days. The app needs to be able to recognise and respond quite quickly, ideally without going out to the internet.

You mean something like this?

Yes, this is the sort of thing we need to do, though it can’t be in Swift.

Actually, I have had good luck asking my friends to handwrite samples of various levels of neatness and slant and using an NN to learn from them. I packaged this training up into a file and use OpenCV in my cocos2d-x app.

I found this… it’s a bit beyond me to be honest though :slight_smile:
http://www.learnopencv.com/handwritten-digits-classification-an-opencv-c-python-tutorial/

Can either or both of you take this on?

You don’t need Python at all.

It seems this method is accurate, though, right? All the stuff about “deskewing” and “HOG descriptors”… maybe I need to read another book! Do you think this is the way to go?
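For what it’s worth, the “deskewing” part is simpler than it sounds: you measure the slant of the strokes from the image moments (skew = mu11 / mu02) and shear each row sideways to cancel it. A minimal sketch in plain Python, assuming the digit is already a small binary grid:

```python
def deskew(img):
    """Straighten a slanted binary digit image using image moments.

    skew = mu11 / mu02 estimates the slant; each row is then sheared
    horizontally by skew * (y - cy) to undo it.
    """
    h, w = len(img), len(img[0])
    # Raw moments: mass, and first moments in x and y.
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    if m00 == 0:
        return [row[:] for row in img]  # empty image: nothing to do
    cx, cy = m10 / m00, m01 / m00      # centroid
    # Central moments needed for the skew estimate.
    mu11 = mu02 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu11 += (x - cx) * (y - cy) * v
            mu02 += (y - cy) ** 2 * v
    if mu02 == 0:
        return [row[:] for row in img]
    skew = mu11 / mu02
    # Shear: each output pixel samples from the slanted position.
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        shift = int(round(skew * (y - cy)))
        for x in range(w):
            sx = x + shift
            if 0 <= sx < w:
                out[y][x] = img[y][sx]
    return out
```

In OpenCV proper you would get the moments from cv2.moments and apply the shear with cv2.warpAffine, but the idea is exactly this.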

You have to choose a method that you will understand in case of bugs, changes, or an unusual circumstance. I know what I would do, and I think it would be different from what others might do. I have an app that uses an Apple Pencil for handwriting; recognizing the input is going well, but I must still add data for varying handwriting and special cases.

If nobody on here is interested, I have a contact at Cambridge University who knows more than I could ever know. It seems machine learning is tough to get right.

You can do this without machine learning. Actually, you almost have to if you want to stay local to the device and not send out any data: whatever recognition data you ship with the app is all you have, unless you update the app and redistribute it with more.
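As an illustration of the no-ML route: with a fixed answer vocabulary (just the digits 0–9), plain template matching often does the job. A toy sketch in Python, assuming each segmented character arrives as a small binary grid and you ship one or more templates per digit with the app:

```python
def hamming(a, b):
    """Count differing pixels between two equal-sized binary grids."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def match_digit(img, templates):
    """Return the label of the template with the fewest differing pixels."""
    return min(templates, key=lambda label: hamming(img, templates[label]))

# Tiny 3x3 templates for "1" and "7" (real ones would be e.g. 20x20,
# with several writing styles stored per digit).
templates = {
    "1": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "7": [[1, 1, 1],
          [0, 0, 1],
          [0, 0, 1]],
}

noisy_one = [[0, 1, 0],
             [0, 1, 0],
             [0, 1, 1]]  # a "1" with one stray pixel
result = match_digit(noisy_one, templates)  # -> "1"
```

Shipping a handful of templates per digit covers a surprising amount of handwriting variation, and everything stays on the device — no internet round trip.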


Hi, I have now taken someone on to do this.