The goal of this project was to create a robot that can reach a goal geolocation from any starting position while avoiding obstacles. By chaining multiple goal locations, the robot can travel a predefined path. Operation is based on two modes: cruise and wall follow (used for obstacle avoidance). Cruise mode uses GPS and compass readings, while wall follow mode uses distance information from five ultrasonic sensors together with compass readings. The main operation loop runs at 20Hz.
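The mode arbitration itself boils down to a simple loop. Below is a minimal sketch of that 20Hz loop, written in Swift purely for illustration; the sensor and drive functions, the 30cm trigger distance, and all names are hypothetical, not the robot's actual code:

```swift
import Foundation

// Hypothetical sensor/actuator interface, for illustration only.
func ultrasonicDistances() -> [Double] { return [] }   // cm, from the 5 sensors
func compassHeading() -> Double { return 0 }           // degrees
func gpsBearingToGoal() -> Double { return 0 }         // degrees, toward the goal
func followWall() { }                                  // hold distance to the obstacle
func drive(turnError: Double) { }                      // steer to reduce the error

let obstacleThreshold = 30.0   // assumed trigger distance in cm

while true {
    // Wall follow wins whenever any sensor reports a close obstacle.
    if (ultrasonicDistances().min() ?? .infinity) < obstacleThreshold {
        followWall()
    } else {
        // Cruise: turn until the compass heading matches the GPS bearing.
        var error = gpsBearingToGoal() - compassHeading()
        if error > 180 { error -= 360 }        // normalize to [-180, 180]
        if error < -180 { error += 360 }
        drive(turnError: error)
    }
    Thread.sleep(forTimeInterval: 1.0 / 20.0)  // 20Hz loop rate
}
```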
The iOS Drag & Drop example is written in Objective-C and Swift 2.0. The idea is to perform an action once the circle is dragged and dropped into the goal area. If the circle is not dropped on the goal, it is animated back to its starting position. Pan gestures are accepted over the entire view, but the circle is constrained to the dragging area subview.
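This is not the example's actual source, but the core idea fits in one gesture handler. A minimal sketch in current Swift syntax (view names, colors, and sizes are made up):

```swift
import UIKit

final class DragDropViewController: UIViewController {
    private let circle = UIView(frame: CGRect(x: 40, y: 40, width: 60, height: 60))
    private let goal = UIView(frame: CGRect(x: 220, y: 400, width: 100, height: 100))
    private var startCenter = CGPoint.zero

    override func viewDidLoad() {
        super.viewDidLoad()
        circle.layer.cornerRadius = 30
        circle.backgroundColor = .blue
        goal.backgroundColor = .green
        view.addSubview(goal)
        view.addSubview(circle)
        startCenter = circle.center

        // Pan gestures are recognized on the whole view, not just the circle.
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        view.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .changed:
            // Follow the finger; clamping to a dragging area could go here.
            let translation = gesture.translation(in: view)
            circle.center = CGPoint(x: circle.center.x + translation.x,
                                    y: circle.center.y + translation.y)
            gesture.setTranslation(.zero, in: view)
        case .ended, .cancelled:
            if goal.frame.contains(circle.center) {
                print("Dropped on goal")   // perform the action here
            } else {
                // Missed: animate the circle back to its starting position.
                UIView.animate(withDuration: 0.3) { self.circle.center = self.startCenter }
            }
        default:
            break
        }
    }
}
```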
LGHorizontalLinearFlowLayout is a UICollectionView flow layout subclass written in Objective-C and Swift 2.0 that supports a custom page width and zooming/scaling of the center page/cell. The scale offset and minimum scale factor parameters can be used to fine-tune the effect.
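The center-cell zoom is a standard flow layout technique. The sketch below shows how it works, with illustrative scaleOffset/minimumScaleFactor properties that mirror the parameters described above but are not the library's exact API:

```swift
import UIKit

final class CenterZoomFlowLayout: UICollectionViewFlowLayout {
    var scaleOffset: CGFloat = 200        // distance at which scaling bottoms out
    var minimumScaleFactor: CGFloat = 0.8 // scale applied to the farthest cells

    override func shouldInvalidateLayout(forBoundsChange newBounds: CGRect) -> Bool {
        return true  // rescale cells continuously while scrolling
    }

    override func layoutAttributesForElements(in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
        guard let collectionView = collectionView,
              let attributes = super.layoutAttributesForElements(in: rect) else { return nil }
        let visibleCenterX = collectionView.contentOffset.x + collectionView.bounds.width / 2
        return attributes.map { attrs in
            let copy = attrs.copy() as! UICollectionViewLayoutAttributes
            // Scale down proportionally to the distance from the visible center.
            let distance = min(abs(copy.center.x - visibleCenterX), scaleOffset)
            let scale = 1 - (distance / scaleOffset) * (1 - minimumScaleFactor)
            copy.transform = CGAffineTransform(scaleX: scale, y: scale)
            return copy
        }
    }
}
```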
For years I’ve been working with many different teams using various workflows and version control systems. There is a great overview of the different workflows (Centralized Workflow, Feature Branch Workflow, Gitflow Workflow and Forking Workflow) on the Atlassian website. In my experience, the Centralized Workflow is mostly used by very small teams working on small projects, usually developed using the Waterfall model. Independent project requirements are divided among the developers, who each follow their part through the development stage. Developers share some of the core application layers (e.g. the data layer), but beyond that, most of the code a developer writes is never seen by the other developers. The process is very simple and carries little of the overhead of the other workflow models, but it can result in lower software quality due to the lack of code review.
UIStackView was introduced in iOS 9 and is very useful for laying out a dynamic collection of views along a horizontal or vertical axis. Whenever the hidden property of one of its managed views changes, the layout is updated to show or hide that view. To learn more about UIStackView, see the Apple class reference. There is also a great tutorial about UIStackView at tuts+.
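For example (in current Swift syntax), hiding an arranged view inside an animation block is all it takes to get an animated layout update; the view controller and labels here are just a made-up setup:

```swift
import UIKit

final class ProfileViewController: UIViewController {
    private let detailLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        let nameLabel = UILabel()
        nameLabel.text = "Name"
        detailLabel.text = "Details"

        // The stack view lays out its arranged subviews on the vertical axis.
        let stackView = UIStackView(arrangedSubviews: [nameLabel, detailLabel])
        stackView.axis = .vertical
        stackView.spacing = 8
        stackView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stackView)
        NSLayoutConstraint.activate([
            stackView.topAnchor.constraint(equalTo: view.topAnchor, constant: 40),
            stackView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 16)
        ])
    }

    // Changing `isHidden` triggers a stack view re-layout; the animation
    // block makes the show/hide transition animated.
    func toggleDetails() {
        UIView.animate(withDuration: 0.3) {
            self.detailLabel.isHidden.toggle()
        }
    }
}
```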
It would be very nice to have a control like UIStackView that is available on iOS 7 and above. I needed a way to create a dynamic collection of views, and found an interesting port of UIStackView: OAStackView.
Check it out: https://github.com/oarrabi/OAStackView
PromiseKit (http://promisekit.org/) is an Objective-C and Swift implementation of the Promises/A+ specification (https://promisesaplus.com/). By definition, a promise represents the eventual result or error of an asynchronous task. You interact with that result or error through the then method, passing a block that receives the result or error as a parameter.
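A minimal Swift sketch of a then chain, assuming the current PromiseKit 6 API (older versions from the Swift 2.0 era spelled the failure handler error rather than catch); the fetch functions are hypothetical:

```swift
import Foundation
import PromiseKit

enum FetchError: Error { case notFound }

// Wrap an asynchronous task in a promise (the task itself is made up).
func fetchUserName(id: Int) -> Promise<String> {
    return Promise { seal in
        DispatchQueue.global().async {
            if id > 0 {
                seal.fulfill("user-\(id)")        // deliver the eventual result
            } else {
                seal.reject(FetchError.notFound)  // deliver the eventual error
            }
        }
    }
}

func fetchAvatarURL(for name: String) -> Promise<String> {
    return .value("https://example.com/\(name).png")
}

firstly {
    fetchUserName(id: 42)
}.then { name in
    fetchAvatarURL(for: name)   // then can chain another promise
}.done { url in
    print("avatar at \(url)")   // the final result lands here
}.catch { error in
    print("failed: \(error)")   // any error in the chain lands here
}
```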
Mogenerator is a very useful command line tool for generating NSManagedObject subclasses. For a given .xcdatamodel file, Mogenerator generates two classes per entity. The first class, _MyEntity, contains the entity's attributes and convenience methods and is continuously overwritten to stay in sync with the data model. The second class, MyEntity (a subclass of _MyEntity), is never overwritten and is the place to put custom implementation for that entity.
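Mogenerator can also emit Swift (via its --swift flag). A rough sketch of the resulting two-file pattern, heavily simplified compared to real generated output:

```swift
import CoreData

// _MyEntity.swift — the machine file: regenerated (overwritten) on every
// mogenerator run to stay in sync with the .xcdatamodel.
class _MyEntity: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var createdAt: Date?
}

// MyEntity.swift — the human file: generated once and never overwritten,
// so custom logic for the entity is safe here.
class MyEntity: _MyEntity {
    var displayName: String {
        return name ?? "Untitled"
    }
}
```

A typical invocation would be something like `mogenerator --model MyModel.xcdatamodeld --swift --output-dir Model` (see the Mogenerator README for the full flag list).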
Recently I stumbled upon an interesting open source framework called OpenALPR (Automatic License Plate Recognition). It seemed like a great candidate to try out on the Raspberry Pi. The OpenALPR framework has a couple of dependencies (notably OpenCV and the Tesseract OCR engine) that you have to download and compile first.
Recently I received a Raspberry Pi Model B and wanted to try out the OpenCV framework on it. After installing OpenCV and all of its dependencies, I tested something similar to the iOS OpenCV sample I wrote a while back. The project demonstrates detection of yellow circular objects along with face and smile detection, and the code is written in Python. With the Raspberry Pi I used a low-cost USB web camera I had lying around.
The face tracking turret uses the OpenCV framework for face detection; the project is written in Python. The face center coordinate is sent over a serial interface to an Arduino. The coordinate is translated into horizontal and vertical angles that are used to position the servos on the pan/tilt mount.
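The translation step is a simple linear mapping. Restated here in Swift purely for illustration (the actual project does this across Python and the Arduino sketch, and the 0–180 degree servo range is an assumption):

```swift
// Map a face-center pixel coordinate onto pan/tilt servo angles.
func servoAngles(faceX: Double, faceY: Double,
                 frameWidth: Double, frameHeight: Double) -> (pan: Double, tilt: Double) {
    // Linear interpolation: left/top edge -> 0 degrees, right/bottom edge -> 180.
    let pan = faceX / frameWidth * 180
    let tilt = faceY / frameHeight * 180
    return (pan, tilt)
}

// Example: a face centered in a 640x480 frame points both servos at 90 degrees.
let angles = servoAngles(faceX: 320, faceY: 240, frameWidth: 640, frameHeight: 480)
print(angles)  // (pan: 90.0, tilt: 90.0)
```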