Browsing by Author "Dritsa, Konstantina"
Now showing 1 - 2 of 2
Item: Speech quality and sentiment analysis on the Hellenic Parliament proceedings (2018-07-10)
Dritsa, Konstantina; Athens University of Economics and Business, Department of Informatics; Androutsopoulos, Ion; Spinellis, Diomidis; Louridas, Panagiotis
"It's not what you say, but how you say it". How often have you heard that phrase? Have you ever wished that you could take an objective and comprehensive look into what is said and how it is said in politics? Within this project, we examined the records of the Hellenic Parliament sittings from 1989 up to 2017 in order to evaluate speech quality and examine the palette of sentiments that characterize the communication among its members. The readability of the speeches is evaluated with the "Simple Measure of Gobbledygook" (SMOG) formula, partially adjusted to the Greek language. The sentiment mining is achieved with the use of two Greek sentiment lexicons. Our findings indicate a significant drop in the average readability score of the parliament records from 2003 up to 2017. On the other hand, the sentiment analysis presents steady scores throughout the years. The communication among parliament members is characterized mainly by the feeling of surprise, followed closely by anger and disgust. At the same time, our results show a steady prevalence of positive words over negative ones. The results are presented in graphs, mainly as comparisons between political parties and between time intervals.

Item: Standing on shoulders or feet? An extended study on the usage of the MSR data papers (2020)
Kotti, Zoe; Dritsa, Konstantina; Spinellis, Diomidis; Kravvaritis, Konstantinos
The establishment of the Mining Software Repositories (MSR) data showcase conference track has encouraged researchers to provide data sets as a basis for further empirical studies. The objective of this study is to examine the usage of data papers published in the MSR proceedings in terms of use frequency, users, and use purpose. Data track papers were collected from the MSR data showcase track and through the manual inspection of older MSR proceedings. The use of data papers was established through manual citation searching, followed by reading the citing studies and dividing them into strong and weak citations. In contrast to weak citations, strong citations indicate actual use of a data paper's data set. Data papers were then manually clustered based on their content, while their strong citations were classified by hand according to the knowledge areas of the Guide to the Software Engineering Body of Knowledge. A survey of 108 authors and users of data papers provided further insights regarding motivation and effort in data paper production, encouraging and discouraging factors in data set use, and desired future directions for data papers. We found that 65% of the data papers have been used in other studies, with a long-tail distribution in the number of strong citations. Weak citations to data papers usually refer to them as examples. In total, MSR data papers are cited less than other MSR papers. A considerable number of the strong citations stem from the teams that authored the data papers. Publications providing Version Control System (VCS) primary and derived data are the most frequent data papers and the most often strongly cited ones. Enhanced developer data papers are the least common and the second least frequently strongly cited. Data paper authors tend to gather data in the context of other research. Users of data sets appreciate high data quality and are discouraged by a lack of replicability in data set construction. Data related to machine learning and data derived from the manufacturing sector are two respondent suggestions for future data papers. Overall, data papers have provided the foundation for a significant number of studies, but there is room for improvement in their utilization. This can be achieved by setting a higher bar for their publication, by encouraging their use, by promoting open science initiatives, and by providing incentives for the enrichment of existing data collections.
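For context on the first item: the abstract names the SMOG readability formula but does not spell it out, and it does not describe the Greek adjustment. The standard English-language SMOG grade (McLaughlin, 1969), presumably the starting point of the adaptation, estimates readability from the count of words with three or more syllables in a sample normalized to 30 sentences:

\[
\text{SMOG grade} = 1.0430 \sqrt{n_{\text{polysyllabic words}} \times \frac{30}{n_{\text{sentences}}}} + 3.1291
\]

How the constants or the polysyllable criterion were adjusted for Greek is not stated in the abstract; the formula above is given only as the conventional baseline.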
