Sensing & Sensibility

Research Area Sensing & Sensibility

The initiative brings together research on sensing and sensor-based technologies across the entire university. It aims to initiate interdisciplinary dialogue between the humanities, technical and natural sciences and to advance new forms of interdisciplinary collaboration.

Through events, panels and shared projects, it asks how researchers from all disciplines can understand and design our sensor-based society.

More information can soon be found here: https://sensing.uni-siegen.de/ 

We are hiring! 2+1 year, 1.0 FTE postdoc position on sensor media & algorithmic accountability

We are currently looking for a postdoc to join the media studies part of the team. The position is 1.0 FTE for 2+1 years. The postdoc will support us in organising the network initiative Sensing and Sensibility and realise the media studies component of the project "Organizing human and non-human cooperation – the case of cyber production management". Besides being part of Sensing & Sensibility, the postdoc can participate in the vibrant, interdisciplinary research culture focusing on digital media and methods at the University of Siegen, such as the activities of the Collaborative Research Centre "Media of Cooperation", the Graduate School "Locating Media" and the Research Initiative "Transformations of the Popular".

More information on the job opening can be found here: job offer postdoc

If you have questions, please do not hesitate to contact Carolin Gerlitz: carolin.gerlitz@uni-siegen.de.

  

 

Project: Organizing human and non-human cooperation – the case of cyber production management

 
Project lead media studies: Dr. Marcus Burkhardt, Prof. Dr. Carolin Gerlitz
Project lead production management: Prof. Dr. Peter Burggräf 
Project lead Ubiquitous Design: Prof. Dr. Marc Hassenzahl 

 
Summary
As digital transformation continues, everyday technologies will fundamentally change: they will
become proactive, autonomous and more and more opaque for humans. Taking production
management as an example, this research project will examine how a cooperation between
humans and algorithmic agents can and ought to be designed. Potential designs will be examined
with regard to three potentially competing objectives: performance, satisfaction and accountability.
To this end, the three applicants will work together on an explorative study. In general, we will
create examples of different types of human-algorithm-cooperation and explore their impact on the
efficiency and effectiveness of the result, the work satisfaction and wellbeing of the humans
involved and societal and regulatory implications. While production management serves as an
example, the project will address broader issues of how to design human-algorithm-cooperation between the priorities of industry, workers, and society at large.
 
Structure
This project is divided into three main research fields. These research fields correspond with three
models we want to develop for the design of human and non-human cooperation. Each applicant
is responsible for one model which corresponds to his or her professional competence. Parallel to
the development of the individual models, interdependencies are revealed and cross-sectional
questions are answered. The aim is to evaluate all three models in a common setting (explorative
study). The main concern here is to consider the models not as separate from each other, but as
an integral part of the other models.
 
Performance Model (Peter Burggräf, Chair IPEM)
Research question: Which combination of human and algorithmic agents leads to the most effective
and efficient production management system (division of labour)?
Conceptually, a production system can be divided into two interconnected instances: The executing
instance (blue collar) and the steering instance/production management (white collar) (Dyckhoff,
2003). Production management includes dispositive production factors like planning, monitoring
and control (Schuh & Schmidt, 2014). The higher the decision level is (from operative to tactical to
strategic), the less algorithmic decision support there is for a human production manager. This is
mainly due to decreasing predictability and increasing risk with higher decision levels (Dhar, 2016).
In this field of research, we want to identify, in particular, areas where learning algorithms outperform
experienced production managers and where cooperation is most productive. Given that
intelligent systems operate well within a certain domain but are of little to no use outside of it
(OpenAI, 2016), we expect machines to be inapt for certain decision tasks.
We will develop a production simulation based on a real production setting – for this purpose our
Smarte Demonstrationsfabrik Siegen (SDFS) seems to be the ideal terrain – and train an algorithm
to master its rules, similar to the achievements of Deep Blue and AlphaGo. A sufficiently accurate
model of an organization is therefore imperative. The trained algorithm is then supposed to make
decisions in cooperation with a human. This scenario will be used for explorative research by
varying different parameters and checking the effects on the overall performance (e. g. lead time
or process cost). Examples for parameters are: division of labour, uncertainty or risk of the decision.
The effects of the parameter variations will be statistically evaluated (e. g. by factor analysis or
design of experiments). The critical question is the role of the human in this performance-oriented
cooperation: Will they be degraded to the physical execution of algorithmic decisions (back to
Taylorism), or will they perform meta-tasks, such as parameter monitoring (forward to New Work)?
Our goal is the technical-organizational optimum of the division of tasks between human
and algorithmic agents.
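The explorative setup described above can be sketched in a few lines of code. The sketch below is purely illustrative: the policies, penalty values and the `simulate_lead_time` function are assumptions for demonstration, not calibrated to the SDFS or any real production line.

```python
import random
from statistics import mean

random.seed(0)  # reproducible toy run

def simulate_lead_time(algorithm_share, n_jobs=1000, uncertainty=0.2):
    """Toy production run: each job's scheduling decision is made either by
    the 'algorithm' or the 'human', with probability algorithm_share.
    All numbers are invented for illustration."""
    lead_times = []
    for _ in range(n_jobs):
        base = random.uniform(1.0, 3.0)        # nominal processing time
        noise = random.gauss(0, uncertainty)   # decision uncertainty / risk
        if random.random() < algorithm_share:
            # assumed algorithm profile: fast decisions, brittle under uncertainty
            decision_penalty = 0.1 + abs(noise) * 0.8
        else:
            # assumed human profile: slower decisions, more robust under uncertainty
            decision_penalty = 0.4 + abs(noise) * 0.3
        lead_times.append(base + decision_penalty)
    return mean(lead_times)

# Vary the division-of-labour parameter and observe the mean lead time
for share in (0.0, 0.5, 1.0):
    print(f"algorithm share {share:.1f}: mean lead time {simulate_lead_time(share):.2f}")
```

In an actual study, such a loop over parameter settings (division of labour, uncertainty, decision risk) would feed the statistical evaluation mentioned above, e.g. a designed experiment over the parameter grid.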
 
 
Satisfaction Model (Marc Hassenzahl, Chair UD)
Research question: How should we model the cooperation of humans and algorithms in production
management to maintain or even increase humans' work satisfaction?
Human-algorithm cooperation must not only be optimized with regard to performance but also in
terms of providing meaningful work to all humans involved. So far, automation has fundamentally
impacted work settings, leading to profound changes in the emotional and cognitive responses to
work. For example, classic automation forced people into supervisory control positions, where work
satisfaction is no longer created by participating in the production itself, but by solving complex
problems arising from failures of automation (Hancock, 2014; Klapperich & Hassenzahl, 2019;
Sheridan & Parasuraman, 2005). In this work setting, long stretches of monotony lead to extensive
boredom and rare severe failures to serious mental overload. Often this is treated as a "natural
consequence” of automation, while in fact it is the consequence of the particular ways current
automation is designed (e.g., Frison et al., 2017; Klapperich & Hassenzahl, 2016). The current
introduction of algorithms into production planning offers the chance to explicitly conceptualize
appropriate relationships between humans and algorithms before the widespread adoption of the
technology. Instead of following the automation’s notion of completely substituting the human in
the process (which mostly fails), the leading model must be to determine meaningful forms of
human-algorithm cooperation. The overarching question is: "How should human-algorithm cooperation be designed to lead to a fulfilling and meaningful work setting?"
 
Feelings of successfully manipulating the environment, of agency and self-efficacy, are central to
human nature and wellbeing (e.g., Ryan & Deci, 2000). For a while now, approaches to technology design,
such as Experience Design (Hassenzahl, 2010) or Positive Design (Desmet, Pohlmeyer, &
Forlizzi, 2013), have emphasized positive (enjoyable, meaningful) experience as a crucial outcome of
technology use. In the work domain, the notion of meaningful work is widely discussed (e.g.,
Chalofsky, 2003). Steger et al. (2012) measured meaningful work through positive meaning (e.g.,
“I have a good sense of what makes my job meaningful”), meaning-making through work (e.g., “I
view my work as contributing to my personal growth”) and greater-good-motivations (e.g., “I know
my work makes a positive difference in the world”). In this study, high meaning correlated positively
with general wellbeing/life satisfaction (.49), job satisfaction (.62) as well as career and
organizational commitment (.70, .51).
 
It is not hard to see how autonomous algorithmic agents may impact meaning and satisfaction.
Autonomous algorithmic agents have the potential to fundamentally question feelings of agency
and locus of control. For example, their opacity will make it difficult to perceive the outcome of joint
work as being fully under one’s personal control. While algorithmic agents imply autonomy and
thus force a cooperative model onto interaction, technology as actor remains fundamentally
different from the human as actor. For example, an algorithm will not truly “feel” competent, when
succeeding in a task. In human-human cooperation, a person might enjoy the successes of a
partner and can derive social value from them, such as pride or enjoyment of the other's gratefulness.
In human-technology cooperation, the presumably social partner "technology" has no feelings of its
own, such as gratefulness, that a human could refer to. Of course, one may simulate this as part of the
algorithm's design; however, it will remain pretence.
The main goal of this research field is to create a better understanding of the experiential
costs and potential strategies of modelling interaction with algorithmic agents along the line
of human-algorithm cooperation. Since algorithms may become an inevitable part of work
environments, the impact of their particular embedding into work must not only be
scrutinized in terms of functional or efficiency gains, but also in terms of positively or
negatively impacting job satisfaction and wellbeing through their use.

Accountability Model (Carolin Gerlitz, Chair DMM)
Research question: How can production management systems be rendered accountable to human
actors and provide means for criticising algorithmic decisions in everyday practice?
The development and deployment of AI-based decision-making and optimization systems raises a
number of epistemological, ethical and social issues. Critical research in the areas of algorithms
studies and science and technology studies has voiced concerns regarding potential biases and
discrimination in algorithmic systems in general and AI in particular (Ziewitz 2016; Seaver 2017;
Neyland 2015; Gillespie 2016). In 2016, Kate Crawford and Ryan Calo pointed toward a blind spot
in current AI research: While such systems are increasingly built into the fabric of everyday life
“there are no agreed methods to assess the sustained effects of such applications on human
populations” (Crawford and Calo 2016, 311). To address this challenge Crawford and her
colleagues at the New York-based AI Now Institute proposed an auditing model for Algorithmic
Impact Assessment which is designed to provide a “mechanism to inform the public and to engage
policymakers and researchers in productive conversation” (Reisman et al. 2018: 5). Along similar
lines the European Commission recently released ethics guidelines that name seven abstract
requirements for the development of trustworthy AI: (1) human agency and oversight, (2) technical
robustness, (3) privacy, (4) transparency, (5) fairness, (6) societal wellbeing, (7) accountability
(European Commission 2019, 14f.). How such high-level recommendations can be translated into
actual socio-technical systems, however, remains an open research question.
 
The development of autonomous algorithmic decision systems for production planning and
management provides a research context in which theoretical as well as practical approaches can
be explored for providing human actors with means to contest and to (partially) control algorithmic 
decisions by rendering them interpretable, explainable, accountable and transparent. While
opening the black box of algorithmic systems and scrutinizing their functional logics is a useful
approach for critical research, it evades everyday practices (Gray, Gerlitz, and Bounegru 2018). It
is thus necessary to design algorithmic agents that can be critically scrutinized in practice. The
goal is to gain a better understanding of the situated requirements for the interpretability,
explainability and accountability of autonomous algorithmic based decisions in production
management and to explore strategies for providing human users with means to monitor,
contest and intervene in algorithmic decision-making procedures. Building on a
reconstruction of state-of-the-art approaches to algorithmic accountability this research field
engages in participatory-collaborative interventions in the design of the performance model
(Marres and Gerlitz 2015; Pöchhacker et al. 2017, 2018; Dieter et al. 2019). The design strategies
for making algorithmic agents contestable will be explored in collaboration with the second research field.