GitHub Copilot is an in-editor extension that makes suggestions to you as you code. It is powered by Codex, a neural language model trained on public code repositories hosted on GitHub. In practice, it tries to complete the line you are typing, or even suggest entire blocks of code.
That said, it should be noted that Copilot doesn't really "get" code. Instead, the underlying deep learning model generates text that appears to fit the context, based on the input it is given. Copilot also does not test the code it proposes, so it may not even run as written.
It is an interesting concept, and it inevitably spawns all kinds of issues. Because the model was trained on GitHub data, there are privacy and security concerns, and some developers worry that tools like this will take their jobs away. What do you think?
The initial reaction of the developer community was positive: many regarded the new tool as a way of reducing the hours spent on repetitive coding. Some even see Copilot as a precursor to a shift toward comment-based programming, comparable to the 20th-century switch from machine language to high-level programming languages.
As we said in the introduction, Copilot is trained on publicly accessible, open-source repositories published under various licenses. This raises ethical and legal questions, and the controversy is compounded by the fact that copyright laws protect many of those repositories. Long story short: reading and learning from publicly accessible data is not the problem; selling AI-generated code back to the community of developers who supplied that data in the first place is what appears to pose a moral and legal issue.
As discussions surrounding Copilot erupted across the Internet, some software industry leaders announced that they would bar their employees from using it, worried that shipping legally questionable code could expose the organization to serious consequences. For now, Copilot stands on shaky ground as far as these implications are concerned.
Now, you may ask yourself whether the code is any good. According to OpenAI's paper, Codex offers the correct response 29% of the time, and the code it writes is generally poorly refactored. The explanation lies in how language models work: they mimic how most people write and have no built-in sense of what is (or is not) correct. Moreover, code on GitHub is, by definition, written by average programmers, so Copilot's best guess is a realistic estimate of what those programmers might write if they were working on the same file that you are. To get the most out of it, divide your code into smaller functions and provide meaningful function names, parameters, and docstrings.
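To illustrate that advice, here is a hypothetical sketch of the kind of prompt that tends to steer Copilot well: a small function with a descriptive name, typed parameters, and a docstring that spells out the expected behavior, including an edge case. The function name and body below are our own invention, not output from Copilot; the body stands in for the sort of completion a well-specified prompt makes more likely.

```python
def moving_average(values: list[float], window: int) -> list[float]:
    """Return the moving average of `values` over a sliding window.

    Returns an empty list when `values` has fewer elements than `window`.
    """
    # With the name, types, and docstring above, a completion along
    # these lines becomes far more likely; here we write it by hand.
    if len(values) < window:
        return []
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```

The docstring does double duty: it documents the code for human readers and constrains the model's guess, which is exactly why small, well-named functions get better suggestions than sprawling ones.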
There is one more thing. Codex was not trained on code written in the last year or two, so it misses some libraries, recent versions, and newer language features. Tools for generating code are almost as old as code itself, and they have been controversial since their inception. With Copilot, we lose one of their usual upsides: we almost always have to modify the generated code, and if we want to change how it works, we cannot simply go back and change the prompt. Instead, we have to debug the generated code directly.
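A hypothetical illustration of what "debugging the generated code directly" can look like. Both functions below are invented for this example (they are not real Copilot output): the first shows a plausible suggestion with a subtle off-by-one bug, and the second shows the hand-edited fix, since there is no prompt to tweak and regenerate from.

```python
# A plausible generated suggestion (buggy): range(1, len(items))
# silently skips the first item in the list.
def sum_prices_buggy(items: list[dict]) -> float:
    total = 0.0
    for i in range(1, len(items)):
        total += items[i]["price"]
    return total

# After debugging the generated code directly:
def sum_prices(items: list[dict]) -> float:
    total = 0.0
    for item in items:
        total += item["price"]
    return total
```

The bug is easy to miss in review because the code looks idiomatic and runs without errors, which is precisely why generated code demands the same scrutiny as code you wrote yourself.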
One of the good things is that your private code is not shared with other users. GitHub does say it uses telemetry data to improve the model, including information about which suggestions users accept or reject. Speaking of telemetry, the GitHub Copilot extension collects usage information about events generated by interacting with the integrated development environment (IDE).
This information may include personal data, such as User Personal Information (see the GitHub Privacy Statement). GitHub then shares this information with Microsoft and OpenAI to improve the Copilot extension. A fair warning: while the software is running, GitHub Copilot reads the code you write and sends it to a server so the model can learn from it.
This makes the use of Copilot for commercial purposes questionable at best, and plainly impossible at worst. Any good NDA you have signed will include a clause stating that you must not, under any circumstances, share code with anyone outside your organisation, which makes Copilot unusable in our day-to-day work.