to write a critical analysis and an abstract based on the paper, writing homework help

  

1. Abstract:
The abstract must contain a description of the topic, the important applications of the topic, a brief description of the previous literature presented in the paper, the methodology adopted by the author, the important results obtained, and the conclusions derived from those results. The word count is limited to 250.

2. Critical Analysis:
After an introduction to the topic, students are expected to critically analyse the literature part that contains the key findings of previous authors, the methodology adopted in the given paper, the parameters used for analysis, the results obtained (from graphs/tables), and the important conclusions derived from those results. Each topic has to be presented separately in the form of paragraphs, and the word count for this part is limited to a range of 500 to 750.
cw1_paper.pdf

Unformatted Attachment Preview


The SmartVision Local Navigation Aid for Blind and Visually Impaired Persons
João José, Miguel Farrajota, João M.F. Rodrigues, J.M. Hans du Buf
Vision Laboratory, Institute for Systems and Robotics (ISR),
University of the Algarve (FCT and ISE), Faro, Portugal
{jjose, jrodrig, dubuf}@ualg.pt and elsio_farrajota@hotmail.com
International Journal of Digital Content Technology and its Applications, Vol. 5, No. 5, May 2011
doi: 10.4156/jdcta.vol5.issue5.40
Abstract
The SmartVision prototype is a small, cheap and easily wearable navigation aid for blind and
visually impaired persons. Its functionality addresses global navigation, for guiding the user to some
destination, and local navigation, for negotiating paths, sidewalks and corridors while avoiding static
as well as moving obstacles. Local navigation applies to both indoor and outdoor situations. In this article
we focus on local navigation: the detection of path borders and obstacles in front of the user and just
beyond the reach of the white cane, such that the user can be assisted in centering on the path and
alerted to looming hazards. Using a stereo camera worn at chest height, a portable computer in a
shoulder-strapped pouch or pocket and only one earphone or small speaker, the system is
inconspicuous, is no hindrance while walking with the cane, and does not block normal ambient
sounds. The vision algorithms are optimised such that the system can work at a few frames per second.
Keywords: Vision Aid, Path Detection, Obstacle Avoidance
1. Introduction
Navigation is very arduous for blind people: they must use the white cane for obstacle
detection while following the fronts of houses and shops, meanwhile memorising all the
locations they become familiar with. In a new, unfamiliar setting they depend completely
on passers-by when asking for a certain shop or the closest post office. Crossing a street is a
challenge, after which they may again be disoriented. In a society in which very sophisticated
technology is available, from tracking GPS-RFID-equipped containers over an area of hundreds of
metres, to GPS-GIS car navigation, to Bluetooth transmitting the sound of movie trailers to mobile
phones in front of cinemas, one can question what it would cost to provide the blind with the most
elementary technology to make life a little bit easier. This technology should not replace the cane,
but complement it: alert the user to obstacles a few metres away and provide guidance
for going to a specific location in town or in a shopping centre.
Different approaches exist to help the visually impaired. One system for obstacle avoidance
is based on a hemispherical ultrasound sensor array [22]. It can detect obstacles in front of the user,
and unimpeded directions are obtained from range values at consecutive times. The system comprises
an embedded computer, the sensor array, an orientation tracker and a set of pager motors.
Talking Points is an urban orientation system [24] based on electronic tags with spoken (voice)
messages. These tags can be attached to many landmarks like entrances of buildings and elevators,
but also bus stops and buses. A push-button on a hand-held device is used to activate a tag,
after which the spoken message is made audible by the device’s small loudspeaker. iSONIC [16]
is a travel aid complementing the cane. It detects obstacles at head-height and alerts by
vibration or sound to dangerous situations, with an algorithm to reduce confusing and
unnecessary detections. iSONIC can also give information about object colour and
environmental brightness.
GuideCane [26] is a computerised travel aid for blind pedestrians. It consists of a long handle
attached to a sensor unit on a small, lightweight and steerable device with two wheels. While
walking, the user holds the handle and pushes the GuideCane in front. Ultrasonic sensors detect
obstacles and steer the device around them. The user feels the steering direction through the
handle and can follow the device easily and without conscious effort. Drishti [20] is an in- and
outdoor navigation system. Outdoors it uses DGPS localisation to keep the user as close as
possible to the centre line of sidewalks. It provides the user with an optimal route by means of
its dynamic routing facility. The user can switch the system from out- to indoor operation with a
simple vocal command which activates a precise ultrasound positioning system. In both cases
the user gets vocal prompts which alert to possible obstacles and which provide guidance while
walking about.
CASBliP or Cognitive Aid System for Blind People [13] was a European Union-funded
project. The main aim was to develop a system capable of interpreting and managing real-world
information from different sources in order to improve autonomous mobility. Environmental
information from various sensors is acquired and transformed into enhanced images for visually
impaired users or into acoustic maps via headphones for blind users. Two prototypes were
developed for the validation of the concepts. The first was an acoustic prototype containing a
novel time-of-flight CMOS range-image sensor mounted on a helmet, in combination with an
audio interface for conveying distance information through a spatialised sound map. The second
was a real-time mobility-assistance prototype equipped with several environmental and user
interfaces for safe in- and outdoor navigation.
SWAN or System for Wearable Audio Navigation is a project of the Sonification Lab at
Georgia Institute of Technology [27]. The core system is a wearable computer with a variety of
location- and orientation-tracking technologies, including GPS, inertial sensors, pedometer,
RFID tags, RF sensors and a compass. Sophisticated sensor fusion is used to determine the best
estimate of the user’s actual location and orientation. Tyflos-Navigator is a system which
consists of dark glasses with two cameras, a portable computer, microphone, earphones and a
2D vibration array [3]. It captures stereo images and converts them into a 3D representation.
The latter is used to generate vibration patterns on the user’s chest, conveying distances of the
user’s head to obstacles in the vicinity. The same authors presented a detailed discussion of
other relevant projects concerning navigation capabilities [2].
Similar initiatives exploited other sensor solutions, for example an IR-multisensor array with
smart signal processing for obstacle avoidance [1] and a multi-sonar system with vibro-tactile
feedback [5]. One system is devoted to blind persons in a wheelchair [7]. Information about the
area around the wheelchair is collected by cameras mounted rigidly on it. Hazards such
as obstacles, drop-offs ahead of or alongside the chair, veering paths and curb cuts can be
detected for finding a clear path and maintaining a straight course [18]. All camera information
can be combined with input from other sensors in order to alert the user by synthesised speech,
audible tones and tactile cues.
From the overview presented above we can conclude that, technologically, there are many
possibilities which can be exploited. Some are very sophisticated, but also very complex and
likely too expensive for most blind persons who, in addition to having to deal with their
handicap, must make ends meet financially. Moreover, ergonomically, most may prefer not
to wear a helmet or other visually conspicuous devices which set them apart. Many
previous initiatives were very ambitious in the sense that information from many sensors was
integrated for solving most problems one can imagine. An additional aspect is that complex
systems are difficult to assemble and integrate, and they require maintenance by professional
technicians. For these reasons the project “SmartVision: active vision for the blind,” funded by
the Portuguese Foundation for Science and Technology, is developing two separate modules for
global and local navigation which can be integrated if the user desires this.
An initiative similar to the Portuguese SmartVision project is the Greek SmartEyes project
[25]. It also addresses global navigation using GPS with a GIS. Vision by two chest-mounted
cameras is used to obtain a disparity map for detecting open space and obstacles. This is
complemented by two ultrasound sensors mounted next to the cameras.
SmartVision’s functionality for local navigation is very restricted: (1) only path tracking and
obstacle detection, and (2) only the space a few metres in front of the user is covered, which is
best done by using one or two miniature cameras. Ideally, the cameras – but also a CPU and
earphone – could be mounted in dark glasses as in the Tyflos-Navigator system [3]. However,
many blind persons are continuously and unconsciously turning their head while focusing on
different sound sources. As a consequence, the user should learn to control his head, which may
be very difficult and imposes yet another physical and even perceptual limitation on the user, or
image processing becomes very complicated because of sudden and unpredictable camera
motions. For these reasons the camera will be attached at chest height, as is done in the
SmartEyes project [25], also taking into account that blind persons have learned not to sway
their body much while walking and swaying the white cane in front of them.
As mentioned above, the SmartVision system has two modes of operation. The first, global
navigation, employs a GIS with GPS and other localisation devices like active RFID tags for
going from some location to a certain destination [9]. Here we concentrate on local navigation, for
centering on paths and in corridors while negotiating both static and moving obstacles. The area
covered is in front of the user and just beyond the reach of the white cane, such that the system
can alert the user to looming obstacles before his white cane will touch – or miss – them. To
this purpose the user is equipped with a stereo camera attached at chest height, a portable
computer, and only one earphone such that normal ambient sounds are not blocked; see Fig. 1.
Instead of using a blocking earplug, a miniature speaker can be worn behind one ear. The
cameras can be cheap webcams which are mounted in a very small tube, and the computer can
be worn in a shoulder-strapped pouch or pocket. Both tube and pouch can be made of or
covered by a material or fabric which matches the user’s clothes.
Figure 1. Illustration of the prototype with stereo camera, portable computer and earphone.
The processing chain is depicted in Fig. 2. Although blind users have learned not to sway
their body much while walking and swaying the white cane in front of them, the camera
attached at chest height will not be very stable over time, i.e., there are cyclic pan and tilt
oscillations. Therefore, after a few initial frames the optical flow will be clustered into overall
frame motion and object motions. Frame motion will be filtered for motion prediction in order
to stabilise new frames such that path detection (in a path-detection window) and detection of
static obstacles in front on the path (in an obstacle-detection window) can be adapted in order to
free CPU time.
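As a rough illustration of this clustering step, the sketch below separates dominant frame motion from deviating object motion in a dense optical flow field. It is only a sketch under stated assumptions: the paper derives flow from its own compact image representation, whereas here OpenCV's Farneback method stands in, the per-axis median is one simple robust estimate of the global pan/tilt, and the deviation threshold is an invented parameter.

```python
# Illustrative separation of overall frame motion from object motion.
# The Farneback flow and median estimate are stand-ins, not the authors' method.
import cv2
import numpy as np

def frame_and_object_motion(prev_gray, gray, thresh=2.0):
    """Return the dominant (frame) motion vector and a boolean mask of
    pixels whose flow deviates from it, i.e. candidate moving objects."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Robust estimate of the global pan/tilt: per-axis median of the flow.
    frame_motion = np.median(flow.reshape(-1, 2), axis=0)
    # Pixels moving differently from the frame are potential moving obstacles.
    deviation = np.linalg.norm(flow - frame_motion, axis=2)
    return frame_motion, deviation > thresh
```

The estimated frame motion can then be low-pass filtered over time to predict and compensate the cyclic pan and tilt oscillations mentioned above.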
Up to this point, all processing is done using only one of the two cameras, for example the left
one. Then, stereo disparity can be used to estimate the distances of static and moving obstacles on
the path, as indicated by the left red arrow in Fig. 2. The left frame has already been processed
for optical flow on the basis of a compact image representation for solving the correspondence
problem between successive frames in time. Since solving the correspondence problem in stereo can
be done using the same image representation, the additional processing for distance estimation
only involves computing the image representation of the right frame, and only within the path-detection or even the obstacle-detection window in order to limit CPU time. In addition,
distance estimation is only required when an obstacle has been detected, and this information is
used to modulate the signals of the user interface: the right red arrow in Fig. 2.
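The sketch below illustrates this on-demand distance estimation under stated assumptions: OpenCV's StereoBM replaces the authors' image representation for stereo correspondence, the detection window, focal length and baseline are hypothetical inputs, and the standard pinhole relation depth = f·B/d converts disparity to metres.

```python
# Hedged sketch: disparity is computed only inside the detection window,
# and only after an obstacle has been detected there.
import cv2
import numpy as np

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def obstacle_distance(left_gray, right_gray, window, focal_px, baseline_m):
    """Median depth in metres inside window = (x, y, w, h). The aligned
    crops and the calibration values (focal_px, baseline_m) are assumed;
    a margin for the disparity search range is omitted for brevity."""
    x, y, w, h = window
    disp = stereo.compute(left_gray[y:y+h, x:x+w],
                          right_gray[y:y+h, x:x+w]).astype(np.float32) / 16.0
    valid = disp > 0          # StereoBM marks unmatched pixels as <= 0
    if not valid.any():
        return None
    # Pinhole stereo relation: depth = focal length * baseline / disparity.
    return float(np.median(focal_px * baseline_m / disp[valid]))
```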
Basically, the user interface can create three alerts: alert P for centering on the path, and
alerts SO and MO for static and moving obstacles. One solution is to use sound synthesis, for
example a pure A tone of 440 Hz for alert P which may increase or decrease in frequency and in
volume when the system advises to correct the heading direction to the left or to the right. The
spectrum and volume of the sound can also be modulated in the case of detected obstacles, or
static and moving obstacles may be indicated by different chirps or beeps. An alternative is to
use text-to-speech synthesis with a limited set of small messages. Different solutions are being
tested by blind persons in order to find the best one.
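A minimal sketch of one such interface option is given below: a pure 440 Hz tone for alert P whose pitch shifts with the advised heading correction, written to a WAV file for playback through the single earphone. The pitch mapping and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical rendering of alert P: 440 Hz when centred on the path,
# rising for "correct right" and falling for "correct left".
import math
import struct
import wave

SAMPLE_RATE = 16000  # Hz; enough for a pure tone

def alert_p_tone(heading_correction, duration=0.3):
    """heading_correction is assumed in [-1, 1] (illustrative convention);
    it shifts the pitch by up to half an octave."""
    freq = 440.0 * (2.0 ** (0.5 * heading_correction))
    n = int(SAMPLE_RATE * duration)
    samples = (int(32767 * 0.5 * math.sin(2 * math.pi * freq * t / SAMPLE_RATE))
               for t in range(n))
    return b"".join(struct.pack("<h", s) for s in samples)

# Write one alert as a mono 16-bit WAV file.
with wave.open("alert_p.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(alert_p_tone(0.25))  # advise a slight correction
```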
It should be stressed that we assume sufficient ambient illumination for image processing. In
the case of very low light levels, for example outdoors at night, special histogram
equalisation is required [23]. Also, algorithms for path and obstacle detection are similar to
those used for robot navigation in corridors [28], although our algorithms are optimised for
running on a small portable computer.
The rest of this article is organised as follows. In the next section we describe path detection,
the path detection window, the adapted Hough space and border detection. In Section 3 the
detection of static obstacles within the obstacle detection window is explained. Section 4 deals
with optical flow and the detection of moving objects. Conclusions are presented in Section 5.
Figure 2. Block scheme of the processing chain. On the right: the user interface with sounds and/or speech.
2. Path Detection
In the SmartVision project, a stereo camera (Bumblebee 2 from Point Grey Research Inc.) is
fixed to the chest of the blind user, at a height of about 1.5 m above the ground. Results presented
here were obtained by using only the right-side camera, and the system performs equally well
using a normal, inexpensive webcam with about the same resolution. The resolution must be
sufficient to resolve textures of the pavements related to possible obstacles like holes and loose
stones [6] with a minimum size of about 10 cm at a distance of 3 to 5 m from the camera. The
first metres are not covered because of the height of the camera; this area is covered by the cane
swayed by the user. Detection of path borders is based on: (a) defining a Path Detection
Window (PDW) where we will search for the borders in each frame; (b) some pre-processing of
the frame to detect the most important edges and to build an Adapted Hough Space (AHS); and
(c) the highest values in the AHS yield the borders.
2.1. Path Detection Window PDW
Input frames have a fixed width W and height H. Let HL denote the horizon line close to the
middle of the frame. If the camera is exactly in the horizontal position, then HL = H/2. If the
camera points lower or higher, HL will be higher or lower, respectively; see Fig. 3. The borders
of the path or sidewalk are normally the most continuous and straight lines in the lower half of
the frame, delimited by HL. At the start, HL will be assumed to be at H/2, but after five frames
the height of HL is dynamically computed on the basis of previous camera frames after
detection of the path borders and the corresponding vanishing points; see below.
Figure 3. From left to right: camera pointing up, horizontally aligned, and pointing down. The Path
Detection Window is highlighted in the images.
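The sketch below captures the PDW logic just described, assuming the frame is a NumPy array: the window is the part of the frame below HL, HL starts at H/2, and it is then nudged towards the y-coordinate of the most recent vanishing point. The smoothing update rule is an assumption; the paper only states that HL is recomputed from previous frames.

```python
# Illustrative Path Detection Window (PDW) handling.
import numpy as np

def path_detection_window(frame, hl):
    """Crop the PDW: everything below the horizon line hl (a row index)."""
    return frame[hl:, :]

def update_horizon(hl, vanishing_y, alpha=0.2):
    """Move HL smoothly towards the latest vanishing-point height;
    alpha is an assumed smoothing factor."""
    return int(round((1 - alpha) * hl + alpha * vanishing_y))

H, W = 480, 640
hl = H // 2                                   # initial assumption: HL = H/2
frame = np.zeros((H, W, 3), dtype=np.uint8)   # dummy camera frame
pdw = path_detection_window(frame, hl)
```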
2.2. Adapted Hough Space AHS
The Canny edge detector and an adapted version of the Hough transform are used for the
detection of the borders and the vanishing point. In order to reduce CPU time, only gray-scale
information is processed after resizing the window to a width of 300 pixels using bilinear
interpolation, maintaining the aspect ratio of the lower part of the frame delimited by HL. Then
two iterations of a 3×3 smoothing filter are applied in order to suppress noise.
The Canny edge detector [4] is applied with σ = 1.0, which defines the size of the Gaussian
filter, in combination with the low and high thresholds 0.25 and 0.5 for
hysteresis edge tracking. The result is a binary edge image with a width of 300 pixels and a
variable height of around 225 pixels in the case of the Bumblebee 2 camera. The left part of Fig. 4
shows one original frame together with the resized and lowpass-filtered PDW and detected
edges below.
The borders of paths and sidewalks are usually found to the left and to the right, assuming
that the path or sidewalk is in the camera’s field of view; see e.g. Fig. 4. We use the Hough
transform [10] to search for lines in the left and right halves of the PDW for border candidates,
also assuming that candidates intersect at a vanishing point.
As we want to check straight lines in the two halves of the window using polar coordinates ρ
and θ, we use a different reference point. Instead of using the origin at the top-left corner, we
use a new origin at the bottom-centre; see the right part of Fig. 4. This simplifies the processing
of the left and right image halves and results in the adapted Hough space AHS in polar
coordinates, as shown in the bottom-right part of Fig. 4. As for the normal Hough space, the AHS is
a (ρ, θ) histogram which is used to count co-occurrences of aligned pixels (x, y) in the binary edge
map. However, there are two differences. First, almost vertical and horizontal edges
cannot be path borders. Hence, the Hough space is restricted to θ = 20°−70° and θ = 110°−160° on
the corresponding sides. This yields a reduction of CPU time of about 30%.
Second, longer sequences of edge pixels count more than short sequences and unconnected
edge pixels. To this purpose we use a counter P which can be incremented or reset. When checking
each pixel of the edge map for a projected line with a certain angle and distance to the new
origin, at the 1st ON pixel P = 1 and the corresponding AHS bin is incremented by 1.
If the 2nd pixel is also ON, P is incremented by 2 to 3 and the bin is incremented by 3, and so on. If
a next pixel is OFF, P is reset to 0 and the bin is not modified. In other words, a run of n
connected edge pixels has P values of 1, 3, 5, 7, etc., or P_i = P_{i-1} + 2 with P_1 = 1, and will
contribute the sum of these values, n², to the AHS bin. An example of an AHS is shown in the right part of Fig. 4
together with magnified regions (bottom). The maxima belonging to …
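To make the voting scheme concrete, the sketch below implements the run-counter logic on the restricted AHS. The conventions are assumptions noted in the comments: a boolean edge map, lines sampled one pixel per row, and an origin-centred (ρ, θ) parametrisation read off the paper's figures.

```python
# Adapted Hough Space voting with the run counter P described above.
import numpy as np

def ahs_vote(edge_map, thetas_deg, rhos):
    """edge_map: boolean array; origin at the bottom-centre, y upwards."""
    h, w = edge_map.shape
    cx = w // 2
    ahs = np.zeros((len(rhos), len(thetas_deg)), dtype=np.int64)
    for ti, theta in enumerate(np.deg2rad(thetas_deg)):
        c, s = np.cos(theta), np.sin(theta)   # cos never 0 in 20-70/110-160
        for ri, rho in enumerate(rhos):
            p = 0                             # run counter P: 1, 3, 5, ...
            for row in range(h):              # walk the line, one px per row
                y_up = h - 1 - row            # origin-centred y (up positive)
                x = int(round((rho - y_up * s) / c)) + cx
                if 0 <= x < w and edge_map[row, x]:
                    p += 2 if p else 1
                    ahs[ri, ti] += p          # a run of n pixels adds n^2
                else:
                    p = 0                     # any gap resets the counter
    return ahs

# Angle ranges restricted as in the text, for the two halves of the window.
thetas = list(range(20, 71, 5)) + list(range(110, 161, 5))
rhos = list(range(-150, 151, 2))
```

With this weighting, one run of n connected pixels outvotes n scattered pixels by a factor of n, which is what favours the long, continuous borders of paths and sidewalks.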