
You are invited to take part in a research study conducted by Jie Chen, a graduate student under the direction of Professor Hong Guo and Professor Xiqing Sha in the W. P. Carey School of Business, Department of Information Systems, at Arizona State University.
The purpose of this study is to examine how varying levels of transparency in generative AI (GenAI) systems influence users' productivity, perceived trustworthiness, and perceptions of performance risk when interacting with an AI assistant.
If you agree to participate, the study will take about 30 minutes in total.
Important: Please do not include any personally identifiable information (such as your name, address, or other identifying details) in any responses or content entered into the generative AI career assistant. For the duration of this study, you should not use any AI tools other than the one provided as part of the experiment.
The maximum bonus is $10. You will receive a baseline $5 added to your final payment upon completion of the study. An additional $5 will be granted if your report ranks among the top 20 submitted reports.
Your participation is entirely voluntary. You may skip any question or stop participating at any time without penalty. Please note that data you enter into the generative AI career assistant cannot be withdrawn once submitted, as it becomes part of the AI interaction logs that cannot be separated from system-level data. You must be 18 years or older to participate.
The results of this study will only be reported in aggregate form (group summaries). Individual responses will not be identified in any reports, presentations, or publications. De-identified data collected as part of this study may be included in replication files that are sometimes required by academic conferences or journals. These files will not contain any names, IDs, or other identifying information, and will only be used for scientific replication and verification purposes. Please do not enter any personally identifiable information into the generative AI career assistant.
Risks are minimal. There is also a small privacy risk related to online data collection.
There are no major direct benefits. You may gain experience exploring career information with an AI tool, and your participation will help researchers design more transparent and trustworthy AI systems.
Your information will be kept confidential. Data will be de-identified and stored securely on ASU servers. Identifiable information (such as emails) will only be used for account access and will not be linked to your responses. Audio recordings, if collected, will be stored securely and de-identified before analysis. Results will only be reported in aggregate form in publications, presentations, or reports. Please note that data entered into the generative AI career assistant may be used to help train or improve the AI tool. All such data will be de-identified before analysis and stored securely.
If you have questions about this study, contact Jie Chen at jchen596@asu.edu, Professor Hong Guo at hguo@asu.edu, or Professor Xiqing Sha at Xiqing.Sha@asu.edu.
If you have questions about your rights as a research participant, or feel you have been placed at risk, you may contact the Chair of the Human Subjects Institutional Review Board at Arizona State University, Office of Research Integrity and Assurance, at (480) 965-6788. The study number is STUDY00022982.
By clicking "I agree", you agree to the consent form and are willing to participate in this study.