An interesting paper by Wei et al. on using #LLM for automated code repair: https://arxiv.org/abs/2309.00608 #SE
Here (https://arxiv.org/pdf/2308.15276.pdf) is an excellent paper by Wu et al. on using LLMs for fault localization. LLMs appear to outperform the SBFL techniques Jaccard, Tarantula, Ochiai, OP2, and DStar, as well as MBFL and SmartFL, by about 50%. #LLM #SE
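For readers unfamiliar with SBFL: these techniques rank program elements by how strongly their execution correlates with failing tests. A minimal Python sketch of the Tarantula and Ochiai formulas (my own illustration, not code from the paper; the coverage spectra below are made up):

```python
import math

# ef/ep: number of failing/passing tests that execute a program element;
# F/P: total failing/passing tests in the suite.

def tarantula(ef: int, ep: int, F: int, P: int) -> float:
    """Tarantula: (ef/F) / (ef/F + ep/P)."""
    if ef == 0:
        return 0.0
    fail_ratio = ef / F
    pass_ratio = ep / P if P else 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

def ochiai(ef: int, ep: int, F: int, P: int) -> float:
    """Ochiai: ef / sqrt(F * (ef + ep))."""
    denom = math.sqrt(F * (ef + ep))
    return ef / denom if denom else 0.0

# Hypothetical per-statement spectra: stmt -> (ef, ep).
spectra = {"s1": (4, 0), "s2": (4, 10), "s3": (1, 12)}
F, P = 4, 20

# A statement covered by every failing test and no passing test ranks first.
for stmt in sorted(spectra, key=lambda s: -ochiai(*spectra[s], F, P)):
    ef, ep = spectra[stmt]
    print(stmt, round(tarantula(ef, ep, F, P), 3), round(ochiai(ef, ep, F, P), 3))
```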
An interesting paper on the usage of #LLM in #SE by Hou et al. -- "Large Language Models for Software Engineering: A Systematic Literature Review" https://arxiv.org/abs/2308.10620
LLMs can be transformative in software engineering, and this paper does a good job of reviewing the state of the art.
some notes on using a single-person Mastodon server https://jvns.ca/blog/2023/08/11/some-notes-on-mastodon
A surprising (to me) opinion I heard at #usesec23 (USENIX Security 2023): you can claim CVEs in your fuzzer paper as long as you found them during the research in which you developed the fuzzer. In particular, there is no expectation that such CVEs be reproducible specifically with the fuzzer described in the paper. I note that many reviewers still consider CVEs a real-world touchstone for fuzzers. I wonder what the community consensus on this is.
Fuzzing is the primary tool for identifying vulnerabilities in applications. With a plethora of fuzzers available today, each boasting its unique exploration strategy, how do you determine which one aligns with your application? Especially considering that fuzzing is computationally expensive, making the right choice is crucial.
In our paper presented at Usenix Security 2023 (#usenix2023 #usenix), featured in Thursday's Track 6, we delve into this challenge. We demonstrate how mutation analysis, traditionally regarded as the gold standard for test suite evaluation, can be effectively applied to assess fuzzers. Moreover, we provide insights on mitigating the computational demands of mutation analysis through the smart evaluation of mutants. https://rahul.gopinath.org/publications/2023/04/26/systematic/
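For intuition, here is a minimal sketch of the mutation-analysis idea in Python (my own illustration under simplified assumptions, not the pipeline from the paper: real tools generate mutants automatically and run them in isolation; `triangle` and its mutants are hypothetical):

```python
# Mutation analysis for evaluating a fuzzer's generated inputs:
# seed small syntactic faults (mutants), then measure how many of
# them the inputs "kill" (expose by a diverging output).

def triangle(a, b, c):                  # subject under test
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def mutant_1(a, b, c):                  # fault: `a == b` -> `a != b`
    if a != b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def mutant_2(a, b, c):                  # fault: dropped the `a == c` disjunct
    if a == b == c:
        return "equilateral"
    if a == b or b == c:
        return "isosceles"
    return "scalene"

def mutation_score(inputs, mutants):
    """Fraction of mutants killed: some input makes the mutant's
    output differ from the original program's output."""
    killed = sum(1 for m in mutants
                 if any(m(*i) != triangle(*i) for i in inputs))
    return killed / len(mutants)

# Inputs a fuzzer might have generated; a higher mutation score
# indicates the inputs exercise more fault-revealing behavior.
fuzzer_inputs = [(3, 3, 3), (3, 4, 4), (3, 4, 5)]
print(mutation_score(fuzzer_inputs, [mutant_1, mutant_2]))  # 0.5
```

Here mutant_2 survives because no generated input covers the `a == c` case, which is exactly the kind of gap a coverage-only comparison of fuzzers can miss.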
We ( @ccanonne@mastodon.xyz and I) are organizing the 2023 High School Fellowship at the School of Computer Science, University of Sydney. Saturday was our first welcome event, attended by 50 high-caliber high school students and their parents.
Looking forward to the rest of the semester #usyd
Halp! My 6 year old is learning geometric shapes, and wants to know how to construct a _Left Angle Triangle_.
I want to set up a server where I can review and discuss software engineering and cybersecurity papers and tools with like-minded folks. I am looking at https://bookwyrm.social as a potential candidate. Is there something like this already available?
The third issue of the 48th volume of Software Engineering Notes (SEN) is out.
https://dl.acm.org/toc/sigsoft/2023/48/3
This issue continues our established columns, such as Peter Neumann’s Risks to the Public, Alex Groce’s Passages, and Bob Schaefer’s Academic Freedom and International Students, along with additional contributions.
SEN is edited by Jacopo Soldani.