
Topics may range from algorithmic systems (social media platforms, risk assessment tools, facial recognition, financial algorithms, etc.) to non-algorithmic ways of sorting and evaluating people and things (Yelp!, IQ tests, SAT tests, debates around identity-sorting mechanisms such as race and gender, and so on).

MAKE SURE TO USE TWO OF THE READINGS LISTED BELOW

  • Massimo Mazzotti (2017). "Algorithmic life," Los Angeles Review of Books, January 22.
  • Tarleton Gillespie (2016). "Algorithm," in Ben Peters (ed.), Digital Keywords. Princeton: Princeton University Press.
  • Ian Hacking (2006). "Making up people," London Review of Books, August 17.
  • Nick Seaver (2012). "Algorithmic recommendations and synaptic functions," Limn.

 


Answer Preview

Personal Injuries: An Insider versus Outsider’s Perspective

David M. Engel, in the article "The Oven Bird's Song: Insiders, Outsiders, and Personal Injuries in an American Community," explores the effect of social values on a tightly-knit community. Occasionally, the author draws on insights from large metropolitan areas, such as Chicago, to illustrate the stark differences between the behavior of people in Sander County and elsewhere. The purpose of the article is to demonstrate how community expectations influence individual behavior and how these two dimensions shape legal practice in a tightly-knit community.

In a metropolitan area, there are thousands of people, the majority of whom were not born there but have relocated for education, jobs, or the convenience the area affords them or their families. Such communities are made up largely of strangers, with each person acquainted with only a relatively small circle of friends, family members, and workmates. In a small, tightly-knit community, by contrast, the comparatively few members all know each other, or at least know someone in every household. The value systems of these two kinds of community vary considerably, which affects members' attitudes, perceptions, and overall behavior toward each other and the outside world, as demonstrated in the article.

The author makes a convincing case for how the local value system has progressively shaped Sander County residents' worldview and their perception of the legal and justice systems, using the example of the pursuit of justice following personal injuries. In metropolitan areas, personal injury litigation is a goldmine, especially for litigious plaintiffs and their attorneys. Sander County, on the other hand, has consistently recorded a remarkably low incidence of personal injury litigation (Engel, 1984, p. 552). County residents described persons who pursued legal means to settle personal injury cases as avaricious and exploitative (Engel, 1984, p. 553). According to Engel (1984, p. 553), residents who sued for personal injuries were also considered troublemakers, and this treatment was applied uniformly regardless of social status. Engel (1984, p. 554) further posits that this perception of personal injury among Sander County residents stemmed from culturally conditioned ideologies of what qualifies as an injury and how such occurrences should be addressed. The stigma and resentment the community expresses toward plaintiffs in personal injury cases discourage individuals from pursuing this course of action.

Facial Recognition

Facial recognition algorithms use specialized software to identify individuals by comparing their faces against stored templates. The technology relies on unique facial geometry and critical features to recognize individuals. Facial recognition mainly involves three critical processes: face detection, face capture, and face match. Face detection entails isolating a target face from a scene by locating its facial geometry, using features such as eye spacing, the bridge of the nose, and the contours of the lips, ears, and chin. Face capture then transforms the detected face into a series of data points derived from the individual's facial features. The last phase, face match, compares a new set of data points against the stored template to determine whether or not the two belong to the same person. Facial recognition algorithms serve two critical purposes in biometrics: identification and authentication. While identification addresses the question 'who are you?', authentication determines whether the individual is who they say they are.
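The three phases can be made concrete with a short sketch. The snippet below is a minimal illustration, not a production pipeline: the landmark names, the four-value 'template', and the match threshold are hypothetical stand-ins for what a real detector and encoder would produce.

```python
# A minimal sketch of the detect -> capture -> match pipeline described above.
# Each "face" is reduced to a small vector of hypothetical landmark distances
# (eye spacing, nose bridge, lip/chin contours), standing in for a detector's output.
import numpy as np

def face_capture(landmarks: dict) -> np.ndarray:
    """Capture: turn detected landmark measurements into a data-point template."""
    return np.array([landmarks["eye_spacing"], landmarks["nose_bridge"],
                     landmarks["lip_contour"], landmarks["chin_contour"]], dtype=float)

def face_match(probe: np.ndarray, template: np.ndarray, threshold: float = 0.5) -> bool:
    """Match: compare a probe's data points against a stored template."""
    return bool(np.linalg.norm(probe - template) < threshold)

# Stored template for an enrolled person (values are illustrative only).
stored = face_capture({"eye_spacing": 6.2, "nose_bridge": 4.1,
                       "lip_contour": 5.0, "chin_contour": 7.3})

# A new capture of (supposedly) the same person.
probe = face_capture({"eye_spacing": 6.3, "nose_bridge": 4.0,
                      "lip_contour": 5.1, "chin_contour": 7.2})

print(face_match(probe, stored))  # True: within the match threshold
```

In a real system, detection would locate the face in an image and a learned encoder would produce a far higher-dimensional template, but the structure of the pipeline is the same.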

Facial recognition software is only as accurate as the algorithm on which it is based. According to Gillespie (2016), an algorithm is a prescribed series of steps for organizing and exploring data to achieve a desired outcome. In facial recognition, the desired result is the authentication and verification of identities using facial geometry. Gillespie (2016) posits that an algorithm is based on a model: a computational representation of a problem and its solution. A facial recognition model is designed to return the results showing the greatest probability of similarity between two sets of data points and can therefore be conceptualized as a procedure for executing the operationalized task.
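One common way to operationalize a 'probability of similarity' between two sets of data points is a normalized similarity score. The sketch below uses cosine similarity purely as an illustration; it is an assumption for demonstration, not the specific measure Gillespie describes or that any particular system employs.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors: 1.0 means identical
    direction (a near-certain match); values near 0 mean little resemblance."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = np.array([6.2, 4.1, 5.0, 7.3])  # stored template (illustrative values)
probe    = np.array([6.3, 4.0, 5.1, 7.2])  # freshly captured data points

print(f"{similarity(probe, enrolled):.4f}")  # close to 1.0 for a likely match
```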

Facial recognition algorithms have come a long way since they were first conceived by Bledsoe, Wolf, and Bisson in the 1960s (NEC, 2021). Between 1964 and 1965, the three pioneers initiated a project to recognize the human face using computer models. The initial project entailed identifying distinguishable landmarks on the face, such as eye spacing, that could be used to identify faces accurately (NEC, 2021). The identified landmarks were then mathematically rotated using computers to compensate for variations in pose, and the distances between landmarks were automatically computed to facilitate the comparison of images for authentication purposes (NEC, 2021). These initial facial recognition algorithms had severely limited capabilities due to the lack of prerequisite technology. Nonetheless, Bledsoe, Wolf, and Bisson demonstrated that facial recognition is a viable biometric benchmark.

As highlighted herein, an algorithm is a process for solving a problem. Before the age of computers, scientists calculated the exact location of a celestial body at a given point in time using mathematical formulas (Mazzotti, 2017). Such formulas fit the definition of an algorithm despite not being mechanized. According to Mazzotti (2017), it is digital algorithms that require instructions to be coded into a computer program, or mechanized; as sets of instructions, algorithms were in use long before computers were first conceptualized. Historical applications involved both manual and mechanized approaches. For example, during World War II, 200 women were trained to perform ballistic calculations for the war effort (Mazzotti, 2017). This algorithmic process was later coded and automated on ENIAC, the first electronic general-purpose computer. Since then, algorithms have metamorphosed into advanced computer programs that use artificial intelligence and machine learning to generate more accurate solutions to specified problems.
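The point that a formula is an algorithm whether or not a machine executes it can be illustrated with the kind of computation those human computers performed. The snippet below codes the drag-free textbook range formula, a deliberate simplification of real wartime firing-table methods, chosen only to show a hand-executable procedure rendered as code.

```python
import math

def projectile_range(v0: float, angle_deg: float, g: float = 9.81) -> float:
    """Level-ground range of a projectile, ignoring air resistance:
    R = v0^2 * sin(2*theta) / g. The procedure is identical whether
    evaluated by hand, by a human 'computer', or by a machine."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

print(round(projectile_range(300.0, 45.0), 1))  # ~9174.3 metres
```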

Advances in algorithms have allowed scientists to progressively improve facial recognition software. These advances have enabled engineers to optimize the performance of facial recognition systems, tweaking how they behave under varying loads to achieve the desired balance of speed and computational elegance. Yet according to Gillespie (2016), despite all these advances, algorithms do not produce a single, certifiable outcome. Instead, even the most advanced algorithms produce a list of probable outcomes. In facial recognition, this means the algorithm returns a series of candidate results ranked by probability.
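In code, that ranked output looks like the sketch below. The gallery names, encodings, and the exp(-distance) scoring are all hypothetical; the shape of the result, a list of candidates ordered by score rather than a single verdict, is the point.

```python
import numpy as np

# Hypothetical gallery of enrolled templates and one probe encoding.
gallery = {"alice": np.array([6.2, 4.1, 5.0, 7.3]),
           "bob":   np.array([5.4, 3.6, 4.2, 6.8]),
           "carol": np.array([6.9, 4.5, 5.6, 7.9])}
probe = np.array([6.3, 4.0, 5.1, 7.2])

# Convert each distance into a rough similarity score in (0, 1] and rank.
scores = {name: float(np.exp(-np.linalg.norm(probe - template)))
          for name, template in gallery.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")  # best candidate first, never a lone certainty
```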

Advances in data science have culminated in machine learning (ML) and artificial intelligence (AI) technologies. ML and AI are the next big thing in algorithms and facial recognition because they bolster capability and accuracy. According to Gillespie (2016), ML allows developers to train an algorithm on a vast database of certified data points. In facial recognition, for example, the software is trained to correctly distinguish images of human faces from all others. During training, the algorithm is run on the data to let it 'learn' how to pair a particular query with the appropriate result (Gillespie, 2016). Facial recognition algorithms thus learn how to isolate images containing a human face from those without.
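A toy version of this learn-then-pair structure, using a nearest-centroid rule on synthetic labeled vectors (all values are hypothetical stand-ins for a certified face database), might look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical labeled training set: 4-d feature vectors for "face" and
# "non-face" examples, standing in for a database of certified data points.
faces     = rng.normal(loc=5.0, scale=0.5, size=(100, 4))
non_faces = rng.normal(loc=2.0, scale=0.5, size=(100, 4))

# "Training": learn one centroid per class from the certified examples.
face_centroid, other_centroid = faces.mean(axis=0), non_faces.mean(axis=0)

def is_face(x: np.ndarray) -> bool:
    """Pair a query with the class whose learned centroid it sits closer to."""
    return bool(np.linalg.norm(x - face_centroid) < np.linalg.norm(x - other_centroid))

print(is_face(np.array([4.8, 5.1, 5.0, 4.9])))  # True: resembles the face class
print(is_face(np.array([2.1, 1.9, 2.2, 2.0])))  # False: resembles the non-face class
```

Real systems learn far richer representations with deep networks, but the principle is the same: the pairing rule is derived from the certified examples rather than written by hand.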

The efficacy of ML is contingent on the similarity between the training data and the actual data 'in the wild' (Gillespie, 2016). For example, a facial recognition algorithm must be trained on typical human faces if it is to identify and match faces autonomously. Training an algorithm on data that differs from what is available in the wild is counterproductive. Gillespie (2016) asserts that differences between training and actual data are a significant hindrance to using ML to bolster the performance of algorithms. For example, if the training data for a facial recognition algorithm comprises faces from only one gender, the program would fail catastrophically in the wild when confronted with faces whose geometry differs systematically from anything it learned.
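This failure mode can be demonstrated directly. In the sketch below, a model 'trained' on one narrow, hypothetical distribution of face vectors accepts almost everything that resembles its training data and almost nothing that differs systematically from it:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Training" faces drawn from one narrow distribution (e.g., one demographic).
train = rng.normal(loc=5.0, scale=0.3, size=(200, 4))
centroid = train.mean(axis=0)
radius = np.linalg.norm(train - centroid, axis=1).max()  # accept anything this close

def accepted(batch: np.ndarray) -> float:
    """Fraction of a batch the model recognizes as a face."""
    return float((np.linalg.norm(batch - centroid, axis=1) <= radius).mean())

similar = rng.normal(loc=5.0, scale=0.3, size=(200, 4))  # looks like training data
shifted = rng.normal(loc=6.5, scale=0.3, size=(200, 4))  # systematically different faces

print(accepted(similar))  # high: matches what the model 'learned'
print(accepted(shifted))  # near zero: the wild differs from the training set
```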

No algorithm is perfect. Parameters that were not present during training often emerge in the field (Gillespie, 2016). Alternatively, data points that were overlooked or scrubbed during training may arise in the wild. When this happens, the algorithm underperforms and must be tweaked to improve its accuracy under the new conditions. Fortunately, improving an algorithm does not require a complete restructuring (Gillespie, 2016). Instead, designers can adjust the initial instructions to fine-tune it, which can involve dialing some parameters up or down based on the prevailing data.
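Dialing a parameter up or down is easiest to picture with a decision threshold, one of the most commonly tuned knobs in a matching system. The score distributions below are synthetic and purely illustrative:

```python
import numpy as np

# Hypothetical match-score distributions observed after deployment:
# genuine pairs score high, impostor pairs lower, with some overlap.
rng = np.random.default_rng(2)
genuine  = rng.normal(0.80, 0.08, 500)
impostor = rng.normal(0.55, 0.08, 500)

def error_rates(threshold: float) -> tuple[float, float]:
    """False rejects (genuine below cutoff) and false accepts (impostor above)."""
    return float((genuine < threshold).mean()), float((impostor >= threshold).mean())

# "Dialing" the acceptance threshold up or down trades one error for the other,
# without restructuring the matching algorithm itself.
for t in (0.60, 0.675, 0.75):
    frr, far = error_rates(t)
    print(f"threshold={t:.3f}  false_reject={frr:.3f}  false_accept={far:.3f}")
```

Raising the threshold trades false accepts for false rejects and vice versa; retuning it in the field changes no line of the matching logic itself.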

ML culminates in a highly capable algorithm that can be deployed in various settings in the form of an application; facial recognition is one such application. In this era of surveillance, facial recognition is among the most potent intelligence-gathering technologies. While most people's interaction with facial recognition, and the algorithm underlying it, is limited to security verification and authentication, other entities, such as governments, apply the technology far more widely.

AI and ML have reshaped how people interact with technology. Deep learning, for example, unlocks far more capability in facial recognition algorithms, allowing systems to detect, track, and match faces in real time (the same deep-learning techniques also power real-time translation of conversations). Facial recognition systems are consequently becoming more accurate and more potent, and their applications in the contemporary world are multiplying. Law enforcement uses facial recognition to combat crime and terrorism; the technology is used when issuing national identification documents, at border and police checks, and in CCTV surveillance systems. It can also be used in healthcare to detect genetic disorders, track adherence to medication, or support pain management, among other use cases, and it is useful in banking. All of this means that facial recognition, and the algorithms underlying it, can only get better. As use cases and demand increase, and as more data becomes available and the technology improves, facial recognition will find more applications than ever imagined.