News
Protecting Australia: Why we need a cyber strategy
Jul 18, 2023
Team Downer's John Lawrance recently facilitated a session with UNSW called 'Australia: Why Do We Need a Cyber Security Strategy'. The panel featured the Chief Strategy Officer at CyberCX, Alastair MacGibbon; the former Secretary of the Department of Finance, Jane Halton AO PSM; and Professor Lyria Bennett Moses, Director of the Allens Hub for Technology, Law and Innovation at UNSW.
The transcript is below.
John Lawrance:
Welcome, and thank you for joining us for the second in our webinar series on privacy in the 21st century. I am your host, John Lawrance from Downer Defence Systems. The purpose of this webinar, titled “Australia: Why Do We Need a Cyber Security Strategy”, is to explore the implications of cyber security and what this means for government, industry and individuals. Downer is helping customers protect their information and communications technology, operational technology and facilities from threats.
Downer Defence Systems’ integrated cyber security approach ensures the coordination of security activities across all business functions. This includes an end-to-end holistic approach with a focus on priorities including people, processes, technology, data and facilities. Please note that participants can send questions for the panel throughout this webinar using the chat function which will be moderated to ensure that the comments are respectful.
So, with me today are four panellists. In alphabetical order, we have Alastair MacGibbon, Chief Strategy Officer at CyberCX. Alastair was until recently the National Cyber Security Adviser, Head of the Australian Cyber Security Centre at the Australian Signals Directorate, and Special Adviser to the Prime Minister on Cyber Security. He has a private sector and government background, having served as the Government's eSafety Commissioner and as General Manager of Security for Dimension Data.
Alastair helped establish the Australian High Tech Crime Centre during his long stint with the Australian Federal Police, and previously worked at eBay as Senior Director of Trust, Safety and Customer Support, and as the local CEO of the infosec certification not-for-profit CREST Australia.
CyberCX has a workforce of over 500 of Australia's most talented cyber security professionals and a national footprint of over 20 offices across Australia.
Next, Jane Halton AO PSM. Jane is a former Secretary of the Australian Department of Finance with extensive experience in finance, insurance, information technology, risk management, human resources, health and ageing, sport, and public policy. She is currently chair or a member of numerous boards and committees, including the National COVID-19 Coordination Commission. Jane holds an honorary degree from the Faculty of Medicine at the University of New South Wales, was appointed an Officer of the Order of Australia in 2015, and received the Public Service Medal in 2002.
Also with us is Professor Lyria Bennett Moses. Lyria is the Director of the Allens Hub for Technology, Law and Innovation and a Professor of Law at the University of New South Wales. Her research explores the relationship between technology and law, including the types of legal issues that arise as technology changes, how these issues are addressed in Australia and other jurisdictions, and the problems of treating technology as an object of regulation. Lyria is a member of the editorial boards of Law in Context, Technology and Regulation, and Law, Technology and Humans.
And last but not least, Edmund Leong is the Security Practice Lead at Downer Defence Systems, which supports our government and critical infrastructure clients – including sovereign national security, transportation and utilities – in offering secure citizen services. A passionate advocate for business-driven technology outcomes, Ed listens and engages with these organisations to help them apply risk-proportionate cyber security that enables their businesses to be more successful.
Wow, that is a lot – very impressive profiles. So, thank you all for joining us today. To get things started, I will begin with you, Jane, if that's okay. We have a question here: as a former departmental secretary, and now a board member of a top 100 publicly listed company, is business doing enough to protect critical systems and infrastructure from cyber threats? And what effect will the proposed regulation have on business, especially small to medium enterprises, given the private sector owns the majority of Australia's digital infrastructure?
Jane Halton:
It’s a really good question, and the thing we need to understand in this environment – and I’m sure we do – is that the threat level just continues to escalate. I think the government's strategy is timely and very welcome, because what it does is remind those of us who are connected to large corporates that this is an ongoing issue, requiring constant vigilance and an ever-improving posture against not just one kind of bad actor, but everyone from the backyard and backroom hacker all the way through to sophisticated states.
But importantly, for small to medium-sized businesses – and also for Mums and Dads, and kids, and everybody who has a device that connects to the internet – these are real threats. For small and medium-sized enterprises, a number of parts of this strategy will obviously be voluntary, but it’s a very active reminder that you don’t want your business, your reputation and probably your finances compromised through some sort of cyber incident.
This is, of course, an updated strategy that we're seeing here, and it rightly covers the waterfront in terms of what it will take to make sure that everyone, from our critical infrastructure operators through to our sole traders and the Mums and Dads who operate small businesses, firstly has it drawn to their attention that this is a real risk, and secondly has mechanisms and places to go for advice. It’s really a significant issue, but I do think this is the right step.
John Lawrance:
Thank you. Yes, and there is quite a lot of press about the cost to society. There is one article here which says Australians lost $634 million in 2019 alone, and that the total cost to the national economy is something like $29 billion a year. So, certainly a big issue. Alastair, given the economic and security benefits of building a sovereign cyber security capability, how can Australia develop a world-class cyber security workforce? What role can government play in fostering this development, and how important is education?
Alastair MacGibbon:
Well, there’s a big difference between 2020 and 2016, when the first National Cyber Security Strategy was launched by Malcolm Turnbull. The 2016 strategy talked about the opportunity of cyber security as much as it did about cyber threat, and it included what I would call incubation-type money for various things.
It funded AustCyber, which is part of the government's growth network, and it funded the Cyber Security CRC[1] in the year or so following the launch of that strategy. If we take 2020, we see a lot more venture capital and a lot more private equity in Australia's cyber security market, which shows quite a significant maturation in a short time. Ours is a private equity-backed company, for example. There are now many venture-backed cyber security companies; I was on the phone to three of them just this morning, and that would have been unheard of even a couple of years ago.
So, the market is more mature. When we talk about education, there is no doubt that universities play a role. UNSW, both at its Kensington campus and at ADFA, plays a significant part in that education space, and so do several other universities. As an organisation, CyberCX is up to nearly 600 cyber security professionals. As I look across the road here at the Cyber Security Centre that I used to run, we were 400 to 500 people. It was always tough to find people, and it is still tough to find people. The experts say we are anywhere from 5,000 to 20,000 people short in the market – you can find a stat to prove anything – but whatever the number, it means it's tough to find people.
So I am not sure that we have yet met what is required in terms of churning people out. The military, which takes a person off the street and trains them in a particular area, was always a good source of a certain type of person. The Signals Directorate churns out another type of person. Then there is the private sector actually training its own staff, which has not been something it has done, I think to the detriment of the industry. And there is the need for micro-credentialing and more polytechnic, trade-type training.
We have certainly seen the TAFEs step up significantly. For the first time ever, there is a nationally agreed Certificate IV-type course. But I would say that on the whole we’ve been deficient as a country. So, while the market has matured in terms of capital investment, we’re yet to see a proper matching of training to needs by organisations. But I might end it there, John, in the interest of moving on to your next question.
John Lawrance:
Yeah, look, thank you for that, Alastair. There are some good things I would like to come back to. I personally think there is a huge opportunity for us as a country to upskill people into these high-paying technical jobs. But as you say, it does not matter whether it is 5,000 or 25,000 – there is still a lot of demand.
Alastair MacGibbon:
One thing I would say: when I was eSafety Commissioner, I used to go to a lot of schools and talk to parents and kids, and I used to say to parents, if you want your kid to go into a white-collar, reasonably safe, well-paid job where you're guaranteed work for life, get them into cyber security.
John Lawrance:
Yes.
Alastair MacGibbon:
And I would still say that to any parent out there, or any person looking to re-tread themselves to get in there. It is not too late for anyone, Jane, to reskill in some of those spots as well – I'll talk to you later.
John Lawrance:
You've got a potential candidate for the Box Hill Institute Certificate IV course. Welcome, Lyria. Professor Lyria Bennett Moses, can laws and regulations help develop a robust and flexible cyber security culture and mindset at an individual and organisational level? And what do you see as the legal and ethical challenges confronting Australia in the 21st century with respect to cyber security, given rapid technological change?
Lyria Bennett Moses:
Very big questions there. So, I might give an overview answer now, and we can come back to some of the detail in later questions. Laws can change incentives, both for individuals and organisations. A really good example outside the cyber context, but one very familiar to Australians, is car safety. Cars must come with seat belts, and people not wearing seat belts are punished. And it’s not just laws, it is also systems: most cars now come with annoying beeps that drive you crazy until you buckle your seatbelt.
So, we can think about cyber safety with similar kinds of incentives. If you are thinking about organisations, cyber security is ultimately a cost. So, how can laws provide incentives that encourage organisations to increase the money they spend on getting good cyber security? Part of it is already there. We have data breach notification laws that drive awareness of when there has been a breach, creating a negative market impact that gives companies a good reason to avoid breaches, because they don't want to have to go through that.
One can go further, if one wants. You can think about mandatory compliance with particular measures, which is not very flexible – I'll come back to the flexibility point in a minute – or mandatory compliance with government directions, and we see some of that in the context of critical infrastructure laws. We have seen in the cyber security strategy the idea of a voluntary code of practice in the context of the Internet of Things. But we can also think about what kinds of legal frameworks we can put around a voluntary code of practice to encourage compliance with it.
So you might think, for example, of setting consumer standards around fitness for purpose, so that compliance with the code might be deemed fitness for purpose along that axis. That is one way to incentivise compliance with even a voluntary code. You can also think about what happens when breaches occur: is there a private right of action for breach? A fixed statutory payment that might need to be made to everyone affected by a breach? Or directors' duties, so that directors are personally liable for harm caused in certain circumstances where they haven't taken charge of cyber security?
Now, all of these may or may not be good policies specifically, and that’s why I said we can come back to the specifics. But the point I'm trying to make is that you can do various things that might incentivise cyber security, albeit each of them potentially comes with other kinds of costs that need to be weighed up, because cyber security might not be the government's only policy objective when it is setting these kinds of policies.
An example of that: we could have really high cyber security if we created a massive government internet filter with whitelists, where the government checked the security of every internet site that Australians are allowed to access. You would have this tiny internet with minimal access, but it would be very safe. I think most Australians would say that is not worth the trade-off. So, not everything that helps is necessarily good policy.
The other point, very quickly, is that it is not just about incentives for organisations; you can think about incentives that will help individuals as well. We have already had some discussion about training, but we can also make things much easier for consumers, as well as looking at broad education. For example, in the context of the cyber security strategy, the government has a series of questions consumers might ask before buying a device that connects to the internet – things like, can you change the password, are there regular software updates, and so forth.
That's quite a hard mechanism: consumers going into their local electronics store and asking someone who might be a relatively junior member of staff a series of security questions that they might not know the answer to, or might get wrong. Can we think about ways to make that kind of decision-making easier for consumers? Can we use labelling, in the same way we do for food, to give clearer signals that help consumers understand, when they're looking at two products, why they might prefer one over the other for security reasons – something more visual and easy, on the packaging, that doesn't involve complex conversations with store staff?
Can we think about systems – for example, installation instructions that make you change the password as part of the installation process, rather than it being a complicated thing that only some consumers even think of doing? So, there are all sorts of things we can do, and can create laws around, to encourage organisations and individuals to make better cyber security decisions.
You also asked about flexibility, and I have gone on for too long, so I'll try to be quicker on this.
John Lawrance:
It is all right.
Lyria Bennett Moses:
Some kinds of legal systems are more flexible than others. If you say everyone must do X, that is not very flexible: cyber security changes all the time, and X may no longer be the best response to the threat. So you’ve got to think as well about designing laws that avoid that kind of obsolescence problem, and you can do that in different ways. You can make things outcome-focused – there are very high penalties when there is a breach, for example – so everyone is trying to avoid the same outcome, rather than having to tick a bunch of boxes that may or may not be the best way of achieving that outcome.
You can use principles-based regulation, which makes it harder to monitor compliance but provides more flexibility in how organisations work within those principles. You can create rules that are easier to update – industry standards, for example – rather than embedding rules directly in legislation, which is harder to fix.
You've also got to think not just about what the legal framework should be to create the right incentives, but about how we design it so that it stays well adapted over time and encourages good practices over time, not just what we might think is a good idea today.
John Lawrance:
I think that is a very good point, because as Jane touched on earlier, these risks are only growing, potentially exponentially, as are the opportunities. We will come back to that obsolescence problem, and maybe the concept around principles, because I think there is a lot there.
Ed, your question: the CEO of Apple, Tim Cook, has faced intense scrutiny in recent years regarding the perceived trade-off between privacy and security. There are quite a few quotes on this, and there is that famous case about the FBI trying to access an iPhone to solve a murder. The quote that I have here from Tim Cook is: "We at Apple reject the idea that our customers should have to make trade-offs between privacy and security. We can and we must provide both in equal measure. We believe that people have a fundamental right to privacy." Do you agree with this? And if so, what does this mean for us as individuals and for these government regulations?
Edmund Leong:
Thanks, John. First, good afternoon everyone – it is great to be here amongst such distinguished panellists. Look, I will get to whether I agree with this or not right at the end of my response. It is interesting to consider this dilemma between privacy and security, and the reality, I think, is that it is not working as well as we would like. It has been hard to legislate away this problem. For example, Australia's assistance and access law from 2018, sometimes referred to as our anti-encryption laws, was met with quite fierce resistance when it was introduced.
However, since then, through a number of amendments – as recently as last month – it has been largely watered down. Legislative mechanisms never really envisaged a physical compartment or storage area that could not be entered under any legal oversight. Applying the digital equivalent of this paradigm will, I think, always be problematic, because cryptography, necessary for security and privacy, by definition creates a space that is computationally infeasible for third parties to enter. It is also impractical to criminalise encryption. People will always find a way to meet their privacy and security needs.
To the specific context behind Tim Cook's comments: you are quite right, John, that was during the time of the San Bernardino terrorist mass shooting back in 2015, when Apple resisted court orders to unlock the seized phones of the perpetrators. In that case, the US government eventually dropped the action when it managed to get a third party into those phones. But those sentiments, I think, started back in 2013, which, if you recall, was a notable turning point in the relationship between the US government and big tech: Edward Snowden's disclosure of the NSA's long-running mass surveillance program, known as PRISM, really turned public opinion against Big Brother.
But if we look at choosing between privacy and security, I think we should look deeper, for a more nuanced understanding, and we will find they are actually not that different, and I'll tell you how. Consider that both governments and individuals have a similar goal: preventing unauthorised people from having access to data. If I drill into that, 'preventing unauthorised people', that's privacy; 'having access to data', that's security.
To refer to an article by the security journalist Daniel Miessler from 2018: the word we use in this industry, 'security', actually comes from two Latin roots, 'se', meaning without, and 'cura', meaning worry – without worry. I think 'without worry' is the most succinct description of our industry's goal that I have ever heard, and it applies equally to both privacy and security. It allows us to reduce the discussion to first principles. Consumers, businesses and governments all have data that they care about, and they all want to control how it is collected, used and protected.
Despite opposing technologies and divergent values, they all share the same goal, which is protecting our data. We want a worry-free digital experience, without feeling that Big Brother is watching and without the feeling that our data is being used against us after a compromise. Both privacy and security are important in our connected world, and as consumers we should have both. But has anybody asked how much of each?
That is why I think we should look at it, alternatively, as a spectrum or a continuum, with privacy on one end and security on the other. In reality, it will always be a trade-off; it will always depend on letting consumers make some simple, informed decisions on what protection is necessary at the cost of privacy, and potentially what privacy is necessary at the cost of protection. Back to the comments: yes, I agree that you should have privacy and security in equal measure, I do. But that only represents the mid-point of this continuum, and it ignores all the other points that allow us – consumers, businesses, and governments – to participate in this digital economy without worry.
John Lawrance:
Thank you, Ed, that is a good answer. I think you have touched on a key point here, which is risk. I too see it as a trade-off. Even the Singapore government has had its data hacked, and I do not think anything is totally secure. Something someone touched on earlier reminds me of the Tylenol scare years ago, when one of the executives from the pharma companies was challenged on why they couldn't stop people with ill intent from accessing the product. The answer was: you can't make it tamper-proof, and you cannot stop people accessing it, because that defeats the purpose of selling the product. That is when they introduced the tamper-evident concept.
I guess in terms of risk, let’s continue with that. Jane, there's a question here from Ronald about how to increase citizen awareness of privacy in cyber security. He suggests it helps to have stricter privacy legislation and to put accountability for data protection with those who store, process and access our PII – which I had to Google: Personally Identifiable Information.
He goes on: make CEOs accountable and liable for proper information management, even when it is outsourced. Often companies think that identification or authentication means making copies of IDs. What are you seeing here? Because you have had the wonderful position of being in government and leading government organisations, and you are now working with top 100 companies.
Jane Halton:
Before we do that, Annabelle asked where the $29 billion number came from. If you look at page 10 of the cyber security strategy, Annabelle, that is where you will find that $29 billion figure. I just had time to look it up to answer the question.
Anyway, that is the answer. So, look, I think some of the answers we have already had get to how you balance the risk and reward equation here, and I think the extremes of the situation have been quite nicely drawn out. You can be completely safe and access absolutely nothing, which is not in anyone's interest. Or you can have a dearth of rules in a world where anything goes, and none of us is going to be happy with that.
On personally identifying information, I don’t think we have had the full extent of the discussion yet. People don’t fully understand identity theft, what is going on on the dark web, et cetera, and why you should be very careful about some of those things. Now, the more incidents we have around these matters, the more I think individuals will worry about this. But certainly, when it comes to the big corporates, there is quite a high level of awareness around the whole issue of breaches. If you look at some of the highly regulated industries, there are now reporting standards under those regulatory arrangements. I am thinking here of the banking context particularly: if you have any sort of privacy breach, you are required to report under CPS 234, which is the relevant prudential standard.
I think what we are seeing here is a level of escalation in what it takes to get people in the private sector to pay attention to this. I can tell you, from a banking perspective, that there is a very, very clear understanding that disclosing people's personal information really matters from a reputation perspective, let alone whether or not people would have recourse.
So, there is a legislative and regulatory regime that requires things to be reported, and it has a pretty low threshold in terms of what actually needs to be reported. What I can tell you from a governance perspective – I chair the Digital Business and Transformation Committee at ANZ – is that the work being done in my committee is something we then take to the whole board. There is such a level of concern about these matters that we maintain a high level of hygiene in relation to them.
Recognising, and again we've heard this already, that there are ever-changing challenges, the more outcomes- or principles-based approach is, I think, good. So I don't see at the moment a situation where you would need to get the mallet out to crack the nut – particularly with the big corporates, who understand that this is fundamental to the trust that people place in them when they do business with them. But I would absolutely agree that this is the sort of thing that needs to be kept under constant review.
I think the bigger challenge for many will actually be in smaller to medium-sized businesses, where this has just not been something that has occurred to people. Even the fact that you're storing people's identifying details – name, age, date of birth, perhaps place of birth – is relevant: those are exactly the data points that have value to people who want to create false identities, and therefore you have a duty of care and a responsibility to handle them quite carefully.
Now, I think the thing about the strategy – and of course we must think about the communication mechanisms here, so people become aware of it – is that it should encourage people to start thinking about those things. In some sectors they already do; I am thinking particularly of health care, where people have a very high level of understanding of the importance of keeping private information private. But I think we would all acknowledge that there are other parts of the economy where, as yet, that understanding hasn’t penetrated.
Alastair talked about some of the work being done to increase awareness. A company that I am associated with is Vault Systems, which is a sovereign cloud provider, and we're seeing increasing numbers of people approach us about the need to ensure that the information they hold is actually held securely – and held securely domestically, as opposed to being stored in India, where we know our laws don't reach. We've seen even some quite sophisticated government players get caught by breaches that occur offshore, where they then have no legislative recourse or reach into those jurisdictions.
Sorry, I have gone on too long. The short answer to the question is that I don't see, at the moment, the need to significantly toughen the legislative framework in respect of penalties, because my preference is always to start with a regime that has regulatory elements, reporting elements, and then education and improved approaches to compliance. Ultimately, you can go there if you need to, but my preference would be to go there on the basis of evidence of need.
John Lawrance:
Thank you, Jane. Alastair, look, this touches on a comment from the start which I thought was quite encouraging. I actually joined AVCAL[2] many years ago because I have this view that, as Australians, we can do whatever we set our minds to, but we tend not to be able to commercialise a lot of our good ideas, which is a lost opportunity. But you are saying there has been a big shift from 2016 to 2020.
Is this shift occurring at all levels, with the SMEs as well? And as far as technology goes, we ain’t [sic] seen nothing yet as far as I'm concerned – we've already got AI, machine learning and quantum computers. The University of New South Wales, in fact, had a quantum computer I was reading about a year ago.
All of this goes back to Lyria’s point about having a principles-based approach. That could be the catalyst for promoting cyber awareness within the SME sector, and also the skills.
Alastair MacGibbon:
Thanks for reaching out to me – I was actually going to jump in on the back of Jane's comments. Jane sits on the board of a big bank, and there is no doubt, in my view – and I have been in this space for quite a while – that the banks have always been at the forefront of this. I would say they are at the forefront of the confidentiality part of cyber security.
Cyber security is confidentiality, integrity and availability of systems and data. On the whole, if you were going to give Australia a mark – and you'd probably give the same mark to pretty much every country – you would probably give us a C minus or a D on a report card for how we have gone at protecting personal data. That is not the banks, by the way; that's us as an economy. The same data might reside in a bank and in some easier target, and that target gets knocked over. My identity is my identity: if it has gone, it can be misused.
We have not done well on confidentiality. My bigger fear is that we have not done very much at all about integrity of information, which brings us to that world of AI, the Internet of Things and real-time decisioning without a human in the loop, which does worry me. Then availability worries me an awful lot, particularly if you are talking about critical infrastructure.
So, it’s a roundabout way of saying, look, there is no doubt that this economy has been divided into those who have cared and invested and those who have not. Unfortunately, the list of those who have cared and invested is very, very small. I have been talking to boards on both sides of that divide for years. I do think more and more are moving over – general Corporations Act duties of directors and officers, where more and more risk is pushed onto them, have meant that they have started to take cyber security risk more seriously.
Again, the bigger end of town – and it is spreading down the list – is starting to understand at least what the risks are. Mitigating, detecting and remediating them is another thing, but you have got to at least have awareness to start. I think at the top end, it is going well. Just after the government launched the 2020 Cyber Security Strategy – it was a week later – a discussion paper came out on critical infrastructure that talked about raising the number of industries designated as critical infrastructure from 5 to 12.
Today that covers electricity, water, gas, ports and telecommunications – four of which sit under the SOCI Act and one under the Telecommunications Sector Security Reforms, which is a Telecommunications Act amendment. Moving from those 5 to 12 sectors keeps the original five and adds things like banking and finance, in there for the first time; transportation, which obviously brings in land transport and air, with the ports already captured; and energy, which captures the gas and electricity and other such things.
Why am I saying all this? Because it is, again, a maturation in understanding the bits of the economy that connect with each other and make up this critical mass of critical infrastructure, where integrity and availability are just as important as, if not more so than, frankly, the confidentiality side. Jane has already mentioned page 10 of the strategy; I will direct people to paragraph 36, which talks in general terms – I call it a market signal from the government – about it being increasingly minded to look at changing things like the Privacy Act, the Consumer Law and data laws, and, I think more importantly, in the second dot point, a very oblique reference to the Corporations Act to increase obligations for owners of systems – not just systems of national significance, but any type of computer system that provides a service.
That will drive more obligations on directors and officers who are probably already caught under the Corporations Act, but I think to specifically call it out, as CPS 234, run by APRA, does for the financial sector, is an important signal to the market to raise its game. A long answer to a very simple question.
The short answer is that in a supply chain world, and in cyber security, everything is linked to everything. I can theoretically have my house in order – not that you can ever have it completely in order. But if all of my supply chain, who tend to be smaller and weaker and not as aware, can be attacked, or degraded, or have stuff stolen from them, then my defences are completely worthless.
Unlike a lot of other parts of security and national security, in cyber security there can be no person left behind, and we have not yet grasped how to do that. I do think the market signal to the much larger part of the economy through the Corporations Act is a good way to start getting there. You've got to crab-walk your way into this anyway; you can't suddenly go from being an anarchic, self-governed place – Rafferty's Rules, as Jane referred to it – to a pristine, well-governed world with cyber security on everyone's lips, which might not be the world we want to live in anyway.
But where is the natural place for us to sit as an economy, accepting the fact that there is always going to be risk with these connected technologies? For small businesses and households, it is tough to defend yourself against a nation state when we see those same nation states knock over well-defended, or at least reasonably well-defended, systems. No one can really protect themselves against a determined offender. Anyway, that is a stream of consciousness, which means I'm just going to go on mute.
John Lawrance:
Thank you, Alastair. I think that is a very good point about the nation state. How can the local hairdresser or fast-food outlet defend itself against a sovereign nation? But as you say, there are ways in, and there are some things that I have read about too which I find quite startling.
I would like to come to you in a second, Ed – at the risk of being too controversial, there was a question here about foreign digital interference. But before we get to that, Lyria, there's a question from Lola, and I'll quote it: “Whose responsibility is it to educate us all about the dark web? Should this be a personal responsibility, and should companies be addressing this more seriously? Given how tech savvy our kids are, does education start as early as school level?” I guess this goes to the whole question of education, remits, mindsets and upskilling the next generation.
Lyria Bennett Moses:
Okay, I will have a little go at that, but there might be others on the panel who can say a lot more. I think we have got to think about education in different phases, or at different times. I do not think it's true to say that because kids are tech savvy, they're necessarily cyber security savvy – there is a big difference between those two things. There are programs at schools now that are helping with cyber education and cyber security, getting kids to think about not telling people their passwords and so forth.
That is starting to happen. But it is not automatic just because they’re on their iPads all the time, right? It has to actually be done. You can look at that through the school system, absolutely, and then take it forward. We have also had conversations about the people who are going to become cyber professionals, and how you educate them.
But I also want to think about tertiary education, or TAFE education, in a different way. What do people who are not going to be cyber professionals, but who will be professionals with control over data, learn about cyber security? To give an example from my own field: what do we want every lawyer to know about protecting their clients' data? Cyber security is particularly important given the sensitivity of the data that lawyers hold. So what sort of education do we need to make sure they have?
I could make the same point for people who are going to become health administrators, or go into a whole variety of other fields, because you want that to go beyond basic high school education. But it's not necessarily being built into university programs outside the usual disciplinary boundaries – you do not typically take a cyber security course in a law school; you have to already be in an engineering or computer science school. How do we make that kind of education much broader?
But on the idea of educating everyone: to the extent we can do it with simple things, like what makes a good password or keeping your software up to date, I think that's great. But I do think we need the rest of the legal system to help make it easier for consumers – as I said, things like labelling mechanisms that consumers can use, alongside education, to help them work out which product to buy.
It's more complicated than just a set of basic education campaigns. The different pieces, if you like, have to fit together, and we have to make sure everyone gets the education they need for the role they're playing – whether that is simply the role everyone plays by being on the system, or the very particular risks associated with particular industries.
John Lawrance:
Just to pick up on a related point, there is an HBR (Harvard Business Review) article I was reading that talks about phishing, boosting your resistance to phishing attacks, and type one and type two thinking. In this context, the quote is: "If you teach employees to protect their information at home, they'll take those lessons back to the office and apply them to company information." I guess that just highlights the importance of that partnership between industry, government and individuals.
Ed, this is a question from Peter – and there is another question here from Sharia Actor which we'll come back to. “Should a cyber security strategy include protecting general elections from foreign digital interference?” You might want to jump in here too, Alastair, given your past experience.
Edmund Leong:
Yes, thanks, John. Yes, it should, and I think it has. Consider Alastair's remarks a bit earlier: if we look at the apparatus, the systems that we use to conduct our general elections, we should consider that critical infrastructure, because it serves a very important function in protecting the cornerstone of a liberal democratic society, which is trust – trust in the process, and trust in our political and press systems. By classifying it that way, and adopting some of the mechanisms in the critical infrastructure discussion paper – for example, if you are classified as a regulated entity, or a system of national significance – there are additional controls that would afford that level of protection to this apparatus.
I think it's important for agencies like the Australian Electoral Commission to look at the end-to-end posture of the components of the apparatus they use to conduct these elections, and to conduct assessments of how much risk they're willing to tolerate to interference – interference with the integrity and availability of the data used to produce the result.
John Lawrance:
Yes, and... Sorry, Alastair.
Alastair MacGibbon:
I think the most important piece of critical infrastructure we have in Australia is our Parliaments. People on the seminar may not agree, and we might disagree politically with how politicians behave, but the most important thing Australia has is its democracy. Everything else stems from it. If you are a public servant, the people you answer to are the political class, and they are the ones who then set the laws that we all operate under.
And so we need to look at electoral systems as critical infrastructure. Interestingly, if we think about how cyber can influence elections, we go back to late 2016 and the last US presidential election, where we saw Russia actively involved in stirring dissent in the US. Now, many would argue you do not need a foreign government to help do that, but in 2016 it certainly did, and the Russians have not denied their involvement in this.
That was a wake-up call for democracies all around the world that other nations will manipulate information and use micro-targeting. Remember, you don't have to influence everyone in the country; you might just need to influence a handful of people in a handful of electorates to change the outcome of a vote, which to me is potentially catastrophic.
Now, is that a cyber security question? Is that a question of how information flows? It was really tough, in my last role, to sit inside an intelligence agency being told, here, we’ve got to protect democracy – most Australians would think that is a bit of an anathema. So, is it the police, or the electoral commissioner, who starts commenting on what's right and wrong in terms of messaging? And has the message already had its effect by the time you take it down and stop it being propagated through systems?
The social media companies have been particularly poor over the years. They've only just now been dragged into taking some responsibility and understanding the role they play, right through to real, hardcore cyber security issues: can I trust the vote counting, and can I trust how the outcome of the vote counting is being portrayed to the public?
So it runs from true cyber security issues right out to the softer edges. But to me, cyber security and elections should go hand in glove, and we are going to need to ask ourselves some really tough questions about who has that responsibility and authority in a liberal Western democracy. I will tell you what the worst outcome is, though: not having that discussion and not biting the bullet on it – another nation determining the outcome of our elections. I have always said it is Australia's job to screw that up, not some other country's.
John Lawrance:
Great. And Lyria, just before you jump in there – it is an interesting point, Alastair – there is another question here from William which resonates with me. We are going off topic a little, from the economic to the social, and from the private sector to government, but William's question is: “What cyber capability should Australia be developing in the context of a cyber war?”
We could argue that there has been a cyber war going on since Stuxnet in 2010. It is interesting, too, how the US takes that forward stance – 'hunt forward', that was the terminology – they have a very clear view on how all this should be dealt with. But going back to the elections, Lyria.
Lyria Bennett Moses:
Yeah, I wanted to make two points about security and elections. The first is about the software we are already using in elections: particularly for things like vote counting, most jurisdictions in Australia use proprietary software. Researchers have already demonstrated bugs in the software being used, but they cannot do proper testing because they cannot get access to the source code.
I think we really need to think quite deeply – and I agree with Alastair here – about how we want that to work, and how we can ensure that the software used in the electoral process is secure. But I also wanted to come back to the point about how voters get manipulated. This comes back to what we were talking about earlier, the link between privacy and security. People often get manipulated because of information about them – whether it's psychological information, or where they live, or what electorate they're in. It's the fact that other actors have access to that kind of information that helps them target messaging in ways that ultimately manipulate voters, often in non-public ways that others cannot then access and critique.
I want to link that back not just to a security issue of how we manage that, but to the question of how we deal with our privacy laws in particular – to make it harder, I suppose, for malicious actors to get information about Australian voters and use it to manipulate us. Because we do... Sorry.
Alastair MacGibbon:
Look at all the social media companies. Cambridge Analytica, of course, freely used information from Facebook – information we freely gave, which included information that was surreptitiously gathered – aggregated it and sold it to third parties, potentially nation states as well, to use against the very economies that grew Facebook in the first place.
It shows the total lack of responsibility that the large tech players have had over the years, to the detriment of the very economies that spawned them.
Lyria Bennett Moses:
Which is where I disagree, I suppose, with the point that Jane made earlier about consequences for bad practices and for not complying with things like privacy laws. When we look at the behaviour of companies like Facebook, in light of what came out of the Cambridge Analytica scandal, I certainly think we need sufficiently high penalties to incentivise them not to act in those ways, and not to be careless as to how their data is being used and who has it -
Jane Halton:
Well, let us be clear: I do not want my position misrepresented, and I think it just was, actually. What I said was, do not take a hammer to crack a nut. If you understand that behaviour is not as it should be and does not meet a community standard, then by all means go with the regulatory approach. The point I was making is that if you impose a very tough set of penalties without good reason, I do not think that encourages the right kind of behaviour. But I do think it is really important that we have appropriate penalties in contexts where harms are quite significant.
To the Cambridge Analytica point: it is a really important one, because what happened was actually legal. It goes to the naivety of some of the people who thought that if we just share everything, life will be wonderful. They did not understand why this was an issue, and even now, I would argue, some of them do not understand it.
Now, that is a different thing, because we can start to form a view about the harms that come from this. This is my point about what's going on in the banking context: there are obligations and reporting, and there can be ever-increasing penalties if they are needed. Provided people recognise harms and take appropriate steps, I do not think you take the hammer to crack the nut. But if people are not doing the right thing, that is a different matter entirely, particularly when there are significant harms to individuals.
John Lawrance:
If I could... Sorry.
Lyria Bennett Moses:
I just wanted to say, I had not meant to misrepresent your point, Jane – apologies if I did; that was not intentional. And I agree with what you just said: there is no reason to have penalties that aren't needed. My point is simply that, in the context of the behaviour of organisations like Facebook, we need two things. We need penalties for breaches that are significant enough to incentivise good behaviour, and we also need to look at the actual laws themselves.
So, I agree with you that not everything that is happening is currently a breach of the law, because we rely on things like consent, and people are asked to read far too many privacy policies. I think somebody once put into a privacy policy, as a joke, that you agreed to hand over your firstborn child, and everyone just clicked 'I agree' – obviously without reading the policy first.
So the point is that we not only need to think about the penalties, we need to realign the rules to ensure that it's not simply a question of: as long as someone ticks an ‘I agree’ box, it's essentially a free-for-all.
Jane Halton:
I think that is a really important point. The ACCC did something on this, and rightly made the point that if you bombard people with pages and pages of stuff, nobody ever looks at it. That is not actually getting effective consent. The fact that you ticked something which, on any reasonable analysis, you could not have been expected to understand is, I think, a fair enough issue.
This is very complicated. We are here talking about cyber security, and of course we have gone into privacy and into critical infrastructure. That, I think, is a good demonstration of the point drawn out at the beginning: all of this is related, and you need a regulatory framework, a level of community understanding, a level of business engagement, and you need to constantly upskill all of those things if you are going to respond to what are escalating threats.
John Lawrance:
Yeah, and to me, listening to all this – and I have been in trouble for using analogies too much – it is a bit like the shift in behaviours and attitudes to drink-driving back in the day. There was the overt introduction of penalties, the drink-driving laws and RBTs and everything. But I still remember very clearly, as a kid, those advertisements with the young father in the jail cell with his head in his hands, getting across the message that drink-driving is actually not a machismo thing.
That was a clear case, I think, of tightening the rules so that people would start to think about something differently. In terms of the skills and the training, there is a question there from Sharia, Alastair. Did you want to respond to that, about Internet of Things product education?
Alastair MacGibbon:
Let me just read the question: “Are there standards for consumers in relation to cyber security?” Just last week, the federal government released a code of practice for Internet of Things connected devices, which I think is a really good first step. For years, people have argued that we should move to something like the electricity and water consumption labels on the front of devices. If you went to a big-box retailer today to buy a washing machine, you would see the number of stars for energy and water consumption, and that has driven a significant amount of consumer behaviour.
Vehicle safety is another classic example, where one of the first questions people ask is about the NCAP rating, as I think it is called. Is it a five or is it a one? If it is a five, I will take my family in it. All I know is to ask which is the safe car, because that is what I want to put my family into. Could we do the same thing with internet-connected devices? The short answer is yes and no at exactly the same time. With a vehicle, you do a crash test, the crash test stands, and the technology inside the vehicle isn't going to change.
The technology inside this device here changes the second it gets re-flashed by the phone manufacturer. The technology inside the thing I'm looking at you all on will upgrade and update, and unfortunately there is a whole range of really nasty people on the other side who are busy trying to break it all the time. What is safe today may not be safe tomorrow – in fact, it may not be safe today, because we just have not found the vulnerability yet – unlike a crash test or a water or electricity rating.
But if we go to a principles-based approach, where we ask: is this device shipped with a requirement to change the password as soon as it comes out of the box? Is the manufacturer committed to updating the software to patch vulnerabilities that become known? And on it goes. Then the answer is yes, standards and that consumer messaging should help, because if you had a choice between a device manufacturer that said, this is our code of conduct to try to keep you safe and secure, and one that didn't, I would buy an item from, or carry out an activity with, the firm that warranted it had subscribed to that code of conduct.
So, I think there is a way for us to drive consumer behaviour. But the vast bulk of this, and sorry for the long answer which is very non-traditional for me, for anyone that knows me, I say sarcastically as possible, is that most of this is going to be driven from the top down to what consumers… to what services get given to consumers via firms.
I do like the bottom-up approach; I just would not see it as the thing that is going to solve the problem. The last thing I'd say is this: as someone who has spent 20-something years trying to talk to people about safety and security online, it's a tough row to hoe because of the complexity of it. I often get asked, why don't we do a slip, slop, slap campaign for cyber safety and/or cyber security, since they are so intimately linked? The answer is that it is so much more complex. As an ageing, white Australian, I know I need to wear a hat and sunscreen – I don't always do it – and I know the ways I can reduce the chance of skin cancer. But it is much harder to protect myself, or to be safer online, than to do those things.
We can give people some basic tips, and it will be good if we can start draining the swamp that way, but it is not likely to solve the problem. Having it designed out by corporates and governments is, I think, the best and quickest way for us to protect the public.
John Lawrance:
Yep. Good response.
A question from Michael here for you, Ed. Are there any current or future trends the panel can share in regard to the cyber security threat environment, taking in both the public and private sectors?
Edmund Leong:
Yeah, the threat environment, as Jane opened with, is worsening, and that is no surprise, because it has also been outlined in the cyber strategy, which talks about the general decline in conditions, with the increase in activity from criminal gangs and nation states. With that context, I think it is important to note that data is essentially one of the things these threat actors will always want to exploit.
If they know that the protection of individuals' privacy, of PII data, is weak, they will always attack that as the easiest way in, because it is more economical for these threat actors to conduct their operations that way. Because they are exploiting the vast volumes of data that are available, technologies such as machine learning and artificial intelligence will also be used, in defence as well as offence. What I expect to see soon is machine-assisted offence as well as machine-assisted defence, where both sides essentially use the technology as a force multiplier.
For us, we like to consider it a force multiplier for defensive purposes, but there is no reason why our adversaries will not also use it for offensive purposes. Another important trend is the development of IoT and industrial IoT security, as we bring the security of critical infrastructure further into public awareness. That ranges across, as Alastair said, the 12 sectors, from traditional industry (energy, mining, gas, utilities) to other things that are not immediately obvious, such as cloud, electoral systems and the defence industry.
The enablers of those patterns and trends are essentially data, machine learning, artificial intelligence, and industrial IoT.
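To make Ed's force-multiplier point concrete, here is a minimal sketch of machine-assisted defence: an unsupervised anomaly detector over login telemetry, with the final call still escalated to a human analyst. The features and numbers are invented for illustration, and the model choice (scikit-learn's IsolationForest) is just one plausible option:

```python
# Sketch of machine-assisted defence: flag anomalous logins for human review.
# All features and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# One row per login event: [hour of day, MB transferred, failed attempts]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # mostly business hours
    rng.normal(20, 5, 500),   # modest data transfer
    rng.poisson(0.2, 500),    # failed attempts are rare
])
suspicious = np.array([[3.0, 900.0, 8.0]])  # 3am, bulk transfer, many failures

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)
print(detector.predict(suspicious))  # [-1] means anomalous: escalate to an analyst
```

The same pattern-finding cuts both ways, which is exactly Ed's point: an adversary can apply the identical technique to probe a target's defences at scale.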
John Lawrance:
That is probably a good segue back to you, Jane, and potentially you as well, Alastair. There is a question here from Adrian: “How do you see AI impacting the future of work in cyber security, and are there any particular skills you think younger generations who are keen to get into the industry should pursue over the coming years?” But also tacked onto that is the comment earlier from Lyria about the obsolescence problem, and your comment around the principles. Maybe we can bring those into the back end of that, in terms of-
Jane Halton:
On the question about AI: if you think about what people who wish to disrupt everything that we do will attempt, AI is potentially both an enormous advantage and an enormous weakness. That is the real challenge we are going to have going forward. We have been talking about the escalating risk, and the escalating steps you need to take to make sure that people are protecting themselves and being forward-leaning on this, as opposed to waiting until there has been a terrible disaster and then saying, oops, we left the door open.
But particularly with AI, the thing about machine learning is that it depends essentially on what you ask it to learn, and on whether people can get in and affect the way the machine learning operates. Think at a macro level, and I will give you a particular example in relation to machine learning in the medical space, or indeed in data analytics, where you often use machine learning to try to derive key lessons or key insights. In medicine, for example, in diagnostics, you might be using machine learning to do things like read x-rays, or pathology results, or things of that ilk.
Now, if you can get in and tamper with how the machine learning operates, you have actually got the potential to cause enormous harm. We need to understand that while the great gift and opportunity of machine learning is the capacity to process huge volumes at rapid speed, and to get better at that over time (we have already talked about elections, where you are looking to process things at rapid speed and huge volume), the kinds of disruption that come from interfering with things like AI are, I think, very significant.
Now, when it comes to what we need by way of skills, I am always reminded of a good friend of mine, a lecturer in computer science, who laughingly said to me one day: "Well, of course, I did my degree in the 50s, and there is absolutely nothing I learned in my degree that is in any way germane at all to what I do today, other than the general principle that standards will continue to improve and you have to continue to stay in front of the game."
I think for young people, there are particular skills we might ask them to go off and learn. Was it the Box Hill Certificate IV that you recommended?
John Lawrance:
Well, that was the first one that got that sort of international and national recognition. Yeah.
Jane Halton:
Yeah. But the bottom line is, whatever your base level of education, you have to do something that is manifestly industry relevant, and you probably have to have a range of skills that are pretty evidently connected to this kind of world, including analytics, et cetera. What we know is that you will have to continue to upgrade if you want to stay current. Hence the joke that I could go off and do the Box Hill Cert IV, because all of us need to keep our skills at the cutting edge; otherwise, we are no longer useful practitioners, we are observers of history. And if you are going to stay current in this world, you need genuinely cutting-edge practitioners.
But the one other thing I would say about AI, of course, is that it is one thing for AI to undertake tasks or produce analytics. Whether you can ever train an AI for judgment, though (we could have a whole discussion about this in robotics), and whether you can train an AI for the policy thinking and, for example, the market-sensitive matters that you might regard as being matters of judgment, we do not yet know.
But certainly, the risks that we run from AI-enabled systems are, I would argue, probably magnified compared with some of the other areas.
John Lawrance:
Yeah, I totally agree. I think there is huge opportunity with AI and ML; we have talked about this before. The simple example I use is: if someone in your family is sick, and you have got a machine that can do brain surgery with a 99% efficacy rate, versus something that is fully explainable but has 70%, you probably go with the 99%. But if you are denied credit, you want the decision to be explainable. And even in this context, you might say, Alastair, that with the rate of change and the speed, if an election result is managed through ML or AI or something of that nature, and we cannot unpack it because it is a black box, then I guess that makes it more prone, more susceptible, to foreign interference, or just interference generally.
Jane Halton:
Can I just make one little observation about that? The Russians are a bit prone to poisoning people, as we know. Sometimes not successfully, but the truth of the matter is, that requires you to physically get an agent somewhere near the intended recipient, and sometimes, unfortunately, it is given to unintended recipients.
Well, imagine you were a state or non-state actor, and AI-enabled medical software was being used to perform some procedure on a high-risk target. You would not even have to get anywhere near your intended victim to give them a substance; you would simply be able to manipulate the outcome using that kind of intervention.
I think you start to see that you can not only have a community-wide effect, but you can also be very targeted, if you were minded to be motivated that way.
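Jane's tampering scenario can be made concrete with a toy label-poisoning experiment: corrupt a slice of the training labels and the resulting model quietly degrades. Here is a minimal sketch on synthetic data with a simple scikit-learn classifier; it does not model any real medical system:

```python
# Toy illustration of training-data poisoning: flipping a fraction of labels
# before training typically degrades the classifier's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
poisoned = y_tr.copy()
flip = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[flip] = 1 - poisoned[flip]        # attacker flips 20% of the labels
dirty_model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print(f"clean accuracy:    {clean_model.score(X_te, y_te):.2f}")
print(f"poisoned accuracy: {dirty_model.score(X_te, y_te):.2f}")
```

The unsettling part, as Jane notes, is that nothing in the poisoned model announces the tampering; it simply makes worse decisions.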
John Lawrance:
Great. Alastair, did you want to add to that?
Alastair MacGibbon:
Well, apart from not wanting to be a Russian opposition leader any time soon, or a dissident generally, to Jane's point about Novichok, I agree.
But look, you are right. I will rue the day for thinking this or saying it, but I think we often overplay the role AI will play in the near term. We know most AI is really machine learning; to Jane's point, we say what we want it to learn, and we teach those things. There is no doubt that augmented machine capability in cyber defence, where you can go through a large amount of data looking for anomalous activity or predicting the way something might go, is a good thing. And there is no doubt offenders will be using it as well, to look at how they can move into places.
If you are talking large volumes of data, I contemplate this almost daily in a very people-heavy business, which ours is. The phrase I use with the teams I work with is: we have got to disrupt our own business before someone else does it for us. Yet every time I look at any allegedly AI type of solution, I still find the person doing the better job. That does not mean that you cannot have this augmented process, but there is still a human somewhere in the loop making a decision, saying: "That's right. That's wrong." The judgment call, to Jane's point, is: I am going to go this way even though the machine is suggesting another, because I understand the consequences of these things.
As to whether we can teach machines consequences, morality, the trolley car dilemma type of questions, I suspect humans are still reasonably safe. But the last thing I would say is this: we do not have a great history of wondering about the consequences of our actions in bringing technology into these loops. It is only after the fact that we sit there and think, whoa, was there a better way we could have done this? There are not that many ethicists sitting there saying, "Don't plug that machine in, because it's actually not a good outcome for humanity." We tend to plug it in and then sit back and say, "That wasn't a very smart idea." But by then, it has been adopted by so many things that we have lost that human intervention bit.
I am not a Luddite at all; I very much enjoy technology. But I do wonder whether we understand, sometimes, the consequences of the pathway we are taking.
John Lawrance:
Agreed. That was a point made by Dr. Simon Longstaff on our last webinar; he said we should start with the ethics in mind, otherwise we risk having another Monsanto scenario. To put it in another context, you were talking about the trolley car; I read somewhere that I think Google have told us they have over a million kilometres on their autonomous vehicles.
Again, there is an opportunity here, as I see it, to have a really thriving, vibrant industry. What seems to be lacking is a legal framework. I guess this brings in that question from Roger, around how the cyber security strategy can be used to stimulate and grow Australia's national cyber security research and development: the technology, innovation, development and commercialisation piece.
Lyria Bennett Moses:
That is a very big question. I am not sure I can answer all of it. But I guess, as someone who is involved in research, just to flag a few things. First, there is already funding. You have mentioned the Cyber Security CRC[1]. There are other things as well; research in this area is funded through other research schemes and so forth. It is not starting from zero. It is really about thinking, how can we make that research better?
One of the significant challenges I find in a university environment generally, but also specifically in the context of cyber security, is how you incentivise interdisciplinary research. Cyber security is ultimately not purely a technical problem. There are technical elements, and you need people to research encryption and other very technical areas, which can be done entirely within one discipline. But you also need to solve the bigger policy problems, which require looking at, for example, the role of law and standards, the role of particular technical solutions, and so forth, and understanding the problem as a whole: the psychology, the law, the technical material, the social science. That is what it takes to really do this effectively.
That is something I tend to find university structures, and funding structures, often work against. I suppose my answer is not a full answer to the question; it is just that we need to think more creatively about what we mean by cyber security research, and how we can bring different expertise together to make that happen more effectively.
John Lawrance:
Thank you, Lyria. Ed, there is a question there from Zoe. We have got a few minutes left, so we should have time for this one. Zoe asks: “Many organisations are developing and/or implementing a cyber security management framework. In your view, what are the most important elements of an effective cyber security management framework? And could you please share some examples of successful implementation and/or pitfalls?”
Edmund Leong:
Good question. I think effective cyber security management, in our experience, starts from the top. You really need to have the board and the CEO invested in the outcomes, and understanding what the risks are if they do not get it right. And legislating against getting it wrong, through the proposed changes noted in paragraph 36 of the Cyber Strategy, where there are criminal penalties for directors and board members, essentially puts the industry on notice that you need to get this right, in a similar way that health and safety is not negotiable.
If we look at frameworks, ISO 27001, the information security management system standard, is a good starting point for understanding information security as a system that contains physical security, cyber security, personnel security, and also (and I think this is called out quite well in the cyber strategy) supply chain. It looks at the problem in four dimensions: personnel, physical, information, and supply chain.
Other frameworks we have seen used, especially in my experience in government and industry, include COBIT, the Control Objectives for Information and Related Technology, an overarching framework for putting cyber security, or information security, in the context of the broader process universe of an organisation. The ACSC's Essential Eight is also a good controls-based framework: it is reasonably easy and accessible for organisations, distilling the highlight reel of the ACSC's Information Security Manual (ISM).
Finally, there is something new on the horizon.
In January, earlier this year, the US Department of Defense published the CMMC, the Cybersecurity Maturity Model Certification, which defence contractors must comply with by 2025. That is a very useful cyber maturity framework that encompasses the four dimensions I just spoke about. It is a five-level maturity scale, and I would anticipate that over the next few years we will see it advocated and spoken about more in the Australian defence context.
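Purely as an illustration of what a five-level maturity scale across Ed's four dimensions might look like in practice, here is a toy self-assessment sketch. The dimension scores and target level are invented placeholders, not the actual CMMC practice definitions:

```python
# Toy maturity self-assessment across the four dimensions Ed mentions.
# Levels and targets are invented placeholders, not real CMMC definitions.
from statistics import mean

scores = {
    "personnel":    2,   # e.g. ad hoc security training
    "physical":     3,   # documented access controls
    "information":  3,   # managed patching and logging
    "supply chain": 1,   # little visibility of suppliers
}

TARGET_LEVEL = 3  # hypothetical target on a 1-5 scale

for dimension, level in scores.items():
    gap = max(0, TARGET_LEVEL - level)
    print(f"{dimension:<13} level {level}/5  gap to target: {gap}")
print(f"overall (mean)  {mean(scores.values()):.1f}/5")
```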
John Lawrance:
There is a whole conversation in that alone. So, thank you, Ed. Just in the interest of time, I would just wrap it up with any closing comments, going left to right on my screen here. Lyria, did you want to add anything before we call it a wrap?
Lyria Bennett Moses:
No, I am good. Thank you.
John Lawrance:
Alastair, any parting comments?
Alastair MacGibbon:
No, thank you, John. Much appreciated.
John Lawrance:
Thank you. Jane.
Jane Halton:
Well, we might as well make it a consensus. No, I think we have had a good run around the paddock. Thank you.
John Lawrance:
No, thank you. Thank you all; that is amazing. On behalf of the University of New South Wales and the Downer Group, I would like to thank all our speakers very sincerely for sharing their valuable time. I know you are probably going to rush off to board meetings as soon as we hang up; you are obviously out and about there, Jane. Your knowledge and expertise are very, very valuable. I personally feel that this is something that we as Australians can do very well. As you say, Alastair, if we can lift our performance from a C minus/D to a B or an A, that would be nice.
This session is also being recorded and will be shared via email with everyone who registered for the event. We look forward to seeing you all at future University of New South Wales events. Thank you.
[1] Cyber Security Cooperative Research Centre.
[2] Australian Venture Capital Association Limited.