The assessment of software quality is one of the most multifaceted (e.g., structural quality, quality-in-use, product quality, process quality) and subjective aspects of software engineering, since in many cases it is substantially based on expert judgement. Such assessments can be performed at almost all phases of software development (from project inception to maintenance) and at different levels of granularity (from source code to architecture). However, human judgement is: (a) inherently biased by the implicit, subjective criteria applied in the evaluation process, and (b) economically less effective than automated or semi-automated approaches. To this end, researchers are still looking for new, more effective methods of assessing various quality characteristics of software systems and the related processes.

In recent years we have observed a rising interest in adopting various approaches that exploit machine learning (ML) and automated decision-making in several areas of software engineering. These models and algorithms help to reduce the effort and risk related to human judgement in favour of automated systems that are able to make informed decisions based on available data and can be evaluated with objective criteria.

The aim of the workshop is to provide a forum for researchers and practitioners to present and discuss new ideas, trends and results concerning the application of ML to software quality assessment. We expect that the workshop will help in: (a) validating existing applications of ML and exploring new ones, (b) comparing their efficiency and effectiveness, both against other automated approaches and against human judgement, and (c) adapting ML approaches already used in other areas of science to software engineering problems.


    Abstract Submission Deadline: Jan 12th, 2018
    Paper Submission Deadline: Jan 19th, 2018
    Notification: Feb 9th, 2018
    Camera ready: Feb 22nd, 2018
    Workshop: Mar 20th, 2018