Does splitting a potentially monolithic application into several smaller ones help prevent bugs?
Another way of asking this is: why do programs tend to be monolithic?
I am thinking of something like an animation package such as Maya, which people use for various different workflows.
If the animation and modelling capabilities were split into their own separate applications and developed separately, with files being passed between them, would they not be easier to maintain?
design architecture maintainability application-design
asked 15 hours ago by dnv, a new contributor
"If the animation and modelling capabilities were split into their own separate applications and developed separately, with files being passed between them, would they not be easier to maintain?" Don't confuse easier to extend with easier to maintain. A module, per se, isn't free of complications or dubious designs. Maya can be hell on earth to maintain while its plugins are not, or vice versa.
– Laiv
13 hours ago
I'll add that a single monolithic program tends to be easier to sell, and easier for most people to use.
– DarthFennec
10 hours ago
@DarthFennec The best apps look like one app to the user but utilize whatever is necessary under the hood. How many microservices power the various websites you visit? Almost none of them are monoliths anymore!
– corsiKa
10 hours ago
@corsiKa There's usually nothing to gain by writing a desktop application as multiple programs that communicate under the hood, that isn't gained by just writing multiple modules/libraries and linking them together into a monolithic binary. Microservices serve a different purpose entirely, as they allow a single application to run across multiple physical servers, allowing performance to scale with load.
– DarthFennec
10 hours ago
@corsiKa I would guess that an overwhelming number of the websites I use are still monoliths. Most of the internet, after all, runs on WordPress.
– Davor Ždralo
7 hours ago
6 Answers
Yes. Generally, two smaller, less complex applications are much easier to maintain than a single large one.
However, you get a new type of bug when the applications all work together to achieve a goal. In order to get them to work together, they have to exchange messages, and this orchestration can go wrong in various ways, even though every app might function perfectly. Having a million tiny apps has its own special problems.
A monolithic app is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. It's only once it has grown large that you can look at the whole and say, "you know what, this would work better if we separated out X and Y."
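To make the orchestration risk concrete, here is a minimal, hypothetical sketch (the file format and field names are invented): each program can be correct against its own idea of the contract, and the bug only exists in the hand-off between them.

```python
import json

# App A (exporter): writes a scene description for App B to pick up.
def export_scene(path):
    scene = {"objects": [{"name": "cube", "position": [0, 0, 0]}]}
    with open(path, "w") as f:
        json.dump(scene, f)

# App B (importer): written against an older contract that used "location".
def import_scene(path):
    with open(path) as f:
        scene = json.load(f)
    for obj in scene["objects"]:
        # Each app passes its own tests; the failure lives in the exchange.
        print(obj["name"], obj["location"])  # raises KeyError: 'location'

export_scene("scene.json")
import_scene("scene.json")
```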
answered 15 hours ago by Ewan
Yes, and there are also performance considerations, e.g. the cost of passing around a pointer versus serializing data.
– JimmyJames
12 hours ago
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except when it is not. It depends heavily on where and how those two applications have to interface with each other.
– Doc Brown
10 hours ago
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." I think I'll want some more explanation for that. Why exactly would the process of generating two executables instead of one from a code base magically make the code easier? What decides how easy code is to reason about is how tightly coupled it is, and similar things. But that's a logical separation, and it has nothing to do with the physical one.
– Voo
9 hours ago
@Ew The physical separation does not force a logical separation; that's the problem. I can easily design a system where two separate applications are closely coupled. Sure, there's some correlation involved here, since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I could claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up to date with their tools will probably also worry about maintenance of code.
– Voo
8 hours ago
While to a certain extent the idea of a "client/server" model for a single application has some interesting upsides (a pseudo protocol contract between binaries), in my experience the OS's inter-process communication APIs tend to have more overhead and are also harder to work with than libraries (which share an address space). Unless one of your applications is useful to your audience standalone (and/or able to be coupled on the fly, like a dedicated server or utility program), I would recommend having a single application with dependencies in static/dynamic libraries instead.
– jrh
7 hours ago
Does splitting a potentially monolithic application into several smaller ones help prevent bugs
Things are seldom that simple in reality.
Splitting up definitely does not help to prevent those bugs in the first place. It can sometimes help to find bugs faster: an application which consists of small, isolated components may allow more individual (kind of "unit") tests for those components, which can sometimes make it easier to spot the root cause of certain bugs and so allow them to be fixed faster.
However,
even an application which appears to be monolithic from the outside may consist of a lot of unit-testable components inside, so unit testing is not necessarily harder for a monolithic app (see the sketch below)
as Ewan already mentioned, the interaction of several components introduces additional risks and bugs
This also depends a lot on how well a larger app can be split up into components, and how broad the interfaces between the components are.
So this is often a trade-off, and not something where a "yes" or "no" answer is correct in general.
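As a sketch of the first bullet above (the names are invented for illustration): even a function buried deep inside a large application can be unit-tested on its own, as long as it has a narrow interface.

```python
import unittest

# A small, isolated component that could live inside a large application.
def interpolate_keyframes(start, end, t):
    """Linear interpolation between two keyframe values at time t in [0, 1]."""
    return start + (end - start) * t

class InterpolationTest(unittest.TestCase):
    def test_midpoint(self):
        self.assertAlmostEqual(interpolate_keyframes(0.0, 10.0, 0.5), 5.0)

    def test_endpoints(self):
        self.assertEqual(interpolate_keyframes(2.0, 8.0, 0.0), 2.0)
        self.assertEqual(interpolate_keyframes(2.0, 8.0, 1.0), 8.0)

if __name__ == "__main__":
    unittest.main()
```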
why do programs tend to be monolithic
Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).
would they not be easier to maintain
"Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams, so better distributed workload, specialized teams with clearer focus, and on.
edited 6 hours ago; answered 14 hours ago by Doc Brown
Easier to maintain once you've finished splitting them, yes. But splitting them is not always easy. Trying to split off a piece of a program into a reusable library reveals where the original developers failed to think about where the seams should be. If one part of the application is reaching deep into another part of the application, it can be difficult to fix. Ripping the seams forces you to define the internal APIs more clearly, and this is what ultimately makes the code base easier to maintain. Reusability and maintainability are both products of well defined seams.
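A minimal sketch of what such a seam can look like (the names are invented for illustration): once callers are forced through an explicit interface, the code on either side can be maintained, replaced, or even split into its own program later.

```python
from abc import ABC, abstractmethod

# The seam: an explicit internal API that callers must go through.
class Renderer(ABC):
    @abstractmethod
    def render(self, mesh: list) -> str: ...

class SoftwareRenderer(Renderer):
    def render(self, mesh: list) -> str:
        # Implementation details (buffers, caches, ...) stay hidden behind the seam.
        return f"rendered {len(mesh)} vertices"

def export_preview(renderer: Renderer, mesh: list) -> str:
    # The caller depends only on the interface, not on the implementation,
    # so either side can later become a library or a separate program.
    return renderer.render(mesh)

print(export_preview(SoftwareRenderer(), [(0, 0, 0), (1, 0, 0), (0, 1, 0)]))
```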
answered 11 hours ago by TKK

Great post. I think a classic/canonical example of what you are talking about is a GUI application: many times a GUI application is one program and the backend/frontend are tightly coupled. As time goes by, issues arise... like someone else needs to use the backend but can't because it is tied to the frontend, or the backend processing takes too long and bogs down the frontend. Often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.
– Trevor Boyd Smith
9 hours ago
I'll have to disagree with the majority on this one. Splitting up an application into two separate ones does not in itself make the code any easier to maintain or reason about.
Separating code into two executables just changes the physical structure of the code, but that's not what is important. What decides how complex an application is, is how tightly coupled the different parts that make it up are. This is not a physical property, but a logical one.
You can have a monolithic application that has a clear separation of different concerns and simple interfaces. You can have a microservice architecture that relies on implementation details of other microservices and is tightly coupled with all others.
What is true is that the process of working out how to split up one large application into smaller ones is very helpful when trying to establish clear interfaces and requirements for each part. In DDD speak, that would be coming up with your bounded contexts. But whether you then create lots of tiny applications or one large one that has the same logical structure is more of a technical decision.
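A small, hypothetical sketch of that point: two physically separate "services" that share a private data layout are more tightly coupled than two modules in one process that only talk through a narrow function.

```python
# Tightly coupled despite physical separation: "service B" reaches straight
# into the internal storage format owned by "service A".
orders_table = {42: {"cust_id": 7, "amt_cents": 1999}}   # A's private schema

def b_monthly_report_bad():
    # If A renames "amt_cents", B silently breaks, even though the two are
    # deployed as separate applications.
    return sum(row["amt_cents"] for row in orders_table.values()) / 100

# Loosely coupled despite living in one process: B only uses A's public API.
def a_get_order_total(order_id: int) -> float:
    row = orders_table[order_id]
    return row["amt_cents"] / 100

def b_monthly_report_good(order_ids):
    return sum(a_get_order_total(i) for i in order_ids)

print(b_monthly_report_bad())        # 19.99
print(b_monthly_report_good([42]))   # 19.99
```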
answered 9 hours ago by Voo

But what if one takes a desktop application with multiple editing modes and instead just makes one desktop application for each mode, which a user would open individually rather than having them interface? Would that not eliminate a nontrivial amount of code dedicated to producing the "feature" of "user can switch between editing modes"?
– The Great Duck
49 mins ago
It's important to remember that correlation is not causation.
Building a large monolith and then splitting it up into several small parts may or may not lead to a good design. (It can improve the design, but it isn't guaranteed to.)
But a good design often leads to a system being built as several small parts rather than a large monolith. (A monolith can be the best design, it's just much less likely to be.)
Why are small parts better? Because they're easier to reason about. And if it's easy to reason about correctness, you're more likely to get a correct result.
To quote C.A.R. Hoare:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
If that's the case, why would anyone build an unnecessarily complicated or monolithic solution? Hoare provides the answer in the very next sentence:
The first method is far more difficult.
And later in the same source (the 1980 Turing Award Lecture):
The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
This is not a question with a yes or no answer. The question is not just one of ease of maintenance; it is also a question of efficient use of skills.
Generally, a well-written monolithic application is efficient. Inter-process and inter-device communication is not cheap. Breaking up a single process decreases efficiency. However, executing everything on a single processor can overload the processor and slow performance. This is the basic scalability issue. When the network enters the picture, the problem gets more complicated.
A well written monolithic application that can operate efficiently as a single process on a single server can be easy to maintain and keep free of defects, but still not be an efficient use of coding and architectural skills. The first step is to break the process into libraries that still execute as the same process, but are coded independently, following disciplines of cohesion and loose coupling. A good job at this level improves maintainability and seldom affects performance.
The next stage is to divide the monolith into separate processes. This is harder because you enter into tricky territory. It's easy to introduce race condition errors. The communication overhead increases and you must be careful of "chatty interfaces." The rewards are great because you break a scalability barrier, but the potential for defects also increases. Multi-process applications are easier to maintain on the module level, but the overall system is more complicated and harder to troubleshoot. Fixes can be devilishly complicated.
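A back-of-the-envelope sketch of the "chatty interface" problem (the latency and cost numbers are invented): once calls cross a process or network boundary, per-call latency dominates, so one coarse-grained call usually beats many fine-grained ones.

```python
PER_CALL_LATENCY_MS = 5.0    # assumed round-trip cost per remote call
PER_ITEM_WORK_MS = 0.1       # assumed processing cost per item

def chatty_cost(n_items: int) -> float:
    # One remote call per item: the latency is paid n times.
    return n_items * (PER_CALL_LATENCY_MS + PER_ITEM_WORK_MS)

def batched_cost(n_items: int) -> float:
    # One remote call carrying all items: the latency is paid once.
    return PER_CALL_LATENCY_MS + n_items * PER_ITEM_WORK_MS

for n in (10, 1000):
    print(f"{n} items: chatty {chatty_cost(n):.1f} ms, batched {batched_cost(n):.1f} ms")
```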
When the processes are distributed to separate servers or to a cloud style implementation, the problems get harder and the rewards greater. Scalability soars. (If you are considering a cloud implementation that does not yield scalability, think hard.) But the problems that enter at this stage can be incredibly difficult to identify and think through.
Yes. Generally 2 smaller less complex applications are much easier to maintain than a single large one.
However. You get a new type of bug when the applications all work together to achieve a goal. In order to get them to work together they have to exchange messages and this Orchestration can go wrong in various ways, even though every app might function perfectly. Having a million tiny apps has its own special problems.
A monolithic app is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. Its only once it has grown large that you can look at the whole and say "you know what, this would work better if we separated out X and Y"
3
Yes and there are also performance considerations e.g. the cost of passing around a pointer versus serializing data.
– JimmyJames
12 hours ago
18
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except, when it is not. Depends heavily on where and how those two applications have to interface with each other.
– Doc Brown
10 hours ago
4
"Generally 2 smaller less complex applications are much easier to maintain than a single large one.". I think I'll want some more explanation for that. Why exactly would the process of generating two instead of one executable from a code base magically make the code easier? What decides how easy code is to reason about, is how tightly coupled it is and similar things. But that's a logical separation and has nothing to do with the physical one.
– Voo
9 hours ago
4
@Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.
– Voo
8 hours ago
3
While to a certain extent the idea of a "client/server" model for a single application has some interesting upsides (a pseudo protocol contract between binaries), from my experience the OS's inter-process communication APIs tend to have more overhead and are also harder to work with than working with libraries (which have a shared addr space). Unless one of your applications is useful to your audience standalone (and/or able to be coupled on the fly, like a dedicated server or utility program), I would recommend having a single application with dependencies in static/dynamic libraries instead.
– jrh
7 hours ago
|
show 16 more comments
Yes. Generally 2 smaller less complex applications are much easier to maintain than a single large one.
However. You get a new type of bug when the applications all work together to achieve a goal. In order to get them to work together they have to exchange messages and this Orchestration can go wrong in various ways, even though every app might function perfectly. Having a million tiny apps has its own special problems.
A monolithic app is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. Its only once it has grown large that you can look at the whole and say "you know what, this would work better if we separated out X and Y"
3
Yes and there are also performance considerations e.g. the cost of passing around a pointer versus serializing data.
– JimmyJames
12 hours ago
18
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except, when it is not. Depends heavily on where and how those two applications have to interface with each other.
– Doc Brown
10 hours ago
4
"Generally 2 smaller less complex applications are much easier to maintain than a single large one.". I think I'll want some more explanation for that. Why exactly would the process of generating two instead of one executable from a code base magically make the code easier? What decides how easy code is to reason about, is how tightly coupled it is and similar things. But that's a logical separation and has nothing to do with the physical one.
– Voo
9 hours ago
4
@Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.
– Voo
8 hours ago
3
While to a certain extent the idea of a "client/server" model for a single application has some interesting upsides (a pseudo protocol contract between binaries), from my experience the OS's inter-process communication APIs tend to have more overhead and are also harder to work with than working with libraries (which have a shared addr space). Unless one of your applications is useful to your audience standalone (and/or able to be coupled on the fly, like a dedicated server or utility program), I would recommend having a single application with dependencies in static/dynamic libraries instead.
– jrh
7 hours ago
|
show 16 more comments
Yes. Generally 2 smaller less complex applications are much easier to maintain than a single large one.
However. You get a new type of bug when the applications all work together to achieve a goal. In order to get them to work together they have to exchange messages and this Orchestration can go wrong in various ways, even though every app might function perfectly. Having a million tiny apps has its own special problems.
A monolithic app is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. Its only once it has grown large that you can look at the whole and say "you know what, this would work better if we separated out X and Y"
Yes. Generally 2 smaller less complex applications are much easier to maintain than a single large one.
However. You get a new type of bug when the applications all work together to achieve a goal. In order to get them to work together they have to exchange messages and this Orchestration can go wrong in various ways, even though every app might function perfectly. Having a million tiny apps has its own special problems.
A monolithic app is really the default option you end up with when you add more and more features to a single application. It's the easiest approach when you consider each feature on its own. Its only once it has grown large that you can look at the whole and say "you know what, this would work better if we separated out X and Y"
answered 15 hours ago
EwanEwan
41.2k33490
41.2k33490
3
Yes and there are also performance considerations e.g. the cost of passing around a pointer versus serializing data.
– JimmyJames
12 hours ago
18
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except, when it is not. Depends heavily on where and how those two applications have to interface with each other.
– Doc Brown
10 hours ago
4
"Generally 2 smaller less complex applications are much easier to maintain than a single large one.". I think I'll want some more explanation for that. Why exactly would the process of generating two instead of one executable from a code base magically make the code easier? What decides how easy code is to reason about, is how tightly coupled it is and similar things. But that's a logical separation and has nothing to do with the physical one.
– Voo
9 hours ago
4
@Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.
– Voo
8 hours ago
3
While to a certain extent the idea of a "client/server" model for a single application has some interesting upsides (a pseudo protocol contract between binaries), from my experience the OS's inter-process communication APIs tend to have more overhead and are also harder to work with than working with libraries (which have a shared addr space). Unless one of your applications is useful to your audience standalone (and/or able to be coupled on the fly, like a dedicated server or utility program), I would recommend having a single application with dependencies in static/dynamic libraries instead.
– jrh
7 hours ago
|
show 16 more comments
3
Yes and there are also performance considerations e.g. the cost of passing around a pointer versus serializing data.
– JimmyJames
12 hours ago
18
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except, when it is not. Depends heavily on where and how those two applications have to interface with each other.
– Doc Brown
10 hours ago
4
"Generally 2 smaller less complex applications are much easier to maintain than a single large one.". I think I'll want some more explanation for that. Why exactly would the process of generating two instead of one executable from a code base magically make the code easier? What decides how easy code is to reason about, is how tightly coupled it is and similar things. But that's a logical separation and has nothing to do with the physical one.
– Voo
9 hours ago
4
@Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.
– Voo
8 hours ago
3
While to a certain extent the idea of a "client/server" model for a single application has some interesting upsides (a pseudo protocol contract between binaries), from my experience the OS's inter-process communication APIs tend to have more overhead and are also harder to work with than working with libraries (which have a shared addr space). Unless one of your applications is useful to your audience standalone (and/or able to be coupled on the fly, like a dedicated server or utility program), I would recommend having a single application with dependencies in static/dynamic libraries instead.
– jrh
7 hours ago
3
3
Yes and there are also performance considerations e.g. the cost of passing around a pointer versus serializing data.
– JimmyJames
12 hours ago
Yes and there are also performance considerations e.g. the cost of passing around a pointer versus serializing data.
– JimmyJames
12 hours ago
18
18
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except, when it is not. Depends heavily on where and how those two applications have to interface with each other.
– Doc Brown
10 hours ago
"Generally 2 smaller less complex applications are much easier to maintain than a single large one." - that's true, except, when it is not. Depends heavily on where and how those two applications have to interface with each other.
– Doc Brown
10 hours ago
4
4
"Generally 2 smaller less complex applications are much easier to maintain than a single large one.". I think I'll want some more explanation for that. Why exactly would the process of generating two instead of one executable from a code base magically make the code easier? What decides how easy code is to reason about, is how tightly coupled it is and similar things. But that's a logical separation and has nothing to do with the physical one.
– Voo
9 hours ago
"Generally 2 smaller less complex applications are much easier to maintain than a single large one.". I think I'll want some more explanation for that. Why exactly would the process of generating two instead of one executable from a code base magically make the code easier? What decides how easy code is to reason about, is how tightly coupled it is and similar things. But that's a logical separation and has nothing to do with the physical one.
– Voo
9 hours ago
4
4
@Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.
– Voo
8 hours ago
@Ew The physical separation does not force a logical separation, that's the problem. I can easily design a system where two separate applications are closely coupled. Sure there's some correlation involved here since people who spend the time to separate an application are most likely competent enough to consider these things, but there's little reason to assume any causation. By the same logic I can claim that using the latest C# version makes code much easier to maintain, since the kind of team that keeps up-to-date with their tools will probably also worry about maintenance of code.
– Voo
8 hours ago
3
3
While to a certain extent the idea of a "client/server" model for a single application has some interesting upsides (a pseudo protocol contract between binaries), from my experience the OS's inter-process communication APIs tend to have more overhead and are also harder to work with than working with libraries (which have a shared addr space). Unless one of your applications is useful to your audience standalone (and/or able to be coupled on the fly, like a dedicated server or utility program), I would recommend having a single application with dependencies in static/dynamic libraries instead.
– jrh
7 hours ago
While to a certain extent the idea of a "client/server" model for a single application has some interesting upsides (a pseudo protocol contract between binaries), from my experience the OS's inter-process communication APIs tend to have more overhead and are also harder to work with than working with libraries (which have a shared addr space). Unless one of your applications is useful to your audience standalone (and/or able to be coupled on the fly, like a dedicated server or utility program), I would recommend having a single application with dependencies in static/dynamic libraries instead.
– jrh
7 hours ago
|
show 16 more comments
Does splitting a potentially monolithic application into several smaller ones help prevent bugs
Things are seldom that simple in reality.
Splitting up does definitely not help to prevent those bugs in the first place. It can sometimes help to find bugs faster. An application which consists of small, isolated components may allow more individual (kind of "unit"-) tests for those components, which can make it sometimes easier to spot the root cause of certain bugs, and so allow it to fix them faster.
However,
even an application which appears to be monolithic from the outside may consist of a lot unit-testable components inside, so unit testing is not necessarily harder for a monolithic app
as Ewan already mentioned, the interaction of several components introduce additional risks and bugs
This depends also a lot on how well a larger app can split up into components, and how broad the interfaces between the components are.
So this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.
why do programs tend to be monolithic
Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).
would they not be easier to maintain
"Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams, so better distributed workload, specialized teams with clearer focus, and on.
add a comment |
Does splitting a potentially monolithic application into several smaller ones help prevent bugs
Things are seldom that simple in reality.
Splitting up does definitely not help to prevent those bugs in the first place. It can sometimes help to find bugs faster. An application which consists of small, isolated components may allow more individual (kind of "unit"-) tests for those components, which can make it sometimes easier to spot the root cause of certain bugs, and so allow it to fix them faster.
However,
even an application which appears to be monolithic from the outside may consist of a lot unit-testable components inside, so unit testing is not necessarily harder for a monolithic app
as Ewan already mentioned, the interaction of several components introduce additional risks and bugs
This depends also a lot on how well a larger app can split up into components, and how broad the interfaces between the components are.
So this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.
why do programs tend to be monolithic
Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).
would they not be easier to maintain
"Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams, so better distributed workload, specialized teams with clearer focus, and on.
add a comment |
Does splitting a potentially monolithic application into several smaller ones help prevent bugs
Things are seldom that simple in reality.
Splitting up does definitely not help to prevent those bugs in the first place. It can sometimes help to find bugs faster. An application which consists of small, isolated components may allow more individual (kind of "unit"-) tests for those components, which can make it sometimes easier to spot the root cause of certain bugs, and so allow it to fix them faster.
However,
even an application which appears to be monolithic from the outside may consist of a lot unit-testable components inside, so unit testing is not necessarily harder for a monolithic app
as Ewan already mentioned, the interaction of several components introduce additional risks and bugs
This depends also a lot on how well a larger app can split up into components, and how broad the interfaces between the components are.
So this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.
why do programs tend to be monolithic
Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).
would they not be easier to maintain
"Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams, so better distributed workload, specialized teams with clearer focus, and on.
Does splitting a potentially monolithic application into several smaller ones help prevent bugs
Things are seldom that simple in reality.
Splitting up does definitely not help to prevent those bugs in the first place. It can sometimes help to find bugs faster. An application which consists of small, isolated components may allow more individual (kind of "unit"-) tests for those components, which can make it sometimes easier to spot the root cause of certain bugs, and so allow it to fix them faster.
However,
even an application which appears to be monolithic from the outside may consist of a lot unit-testable components inside, so unit testing is not necessarily harder for a monolithic app
as Ewan already mentioned, the interaction of several components introduce additional risks and bugs
This depends also a lot on how well a larger app can split up into components, and how broad the interfaces between the components are.
So this is often a trade-off, and nothing where a "yes" or "no" answer is correct in general.
why do programs tend to be monolithic
Do they? Look around you, there are gazillions of Web apps in the world which don't look very monolithic to me, quite the opposite. There are also a lot of programs available which provide a plugin model (AFAIK even the Maya software you mentioned does).
would they not be easier to maintain
"Easier maintenance" here often comes from the fact that different parts of an application can be developed more easily by different teams, so better distributed workload, specialized teams with clearer focus, and on.
edited 6 hours ago
answered 14 hours ago
Doc BrownDoc Brown
135k23248400
135k23248400
add a comment |
add a comment |
Easier to maintain once you've finished splitting them, yes. But splitting them is not always easy. Trying to split off a piece of a program into a reusable library reveals where the original developers failed to think about where the seams should be. If one part of the application is reaching deep into another part of the application, it can be difficult to fix. Ripping the seams forces you to define the internal APIs more clearly, and this is what ultimately makes the code base easier to maintain. Reusability and maintainability are both products of well defined seams.
great post. i think a classic/canonical example of what you talk about is a GUI application. many times a GUI application is one program and the backend/frontend are tightly-coupled. as time goes by issues arise... like someone else needs to use the backend but can't because it is tied to the frontend. or the backend processing takes too long and bogs down the frontend. often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.
– Trevor Boyd Smith
9 hours ago
add a comment |
Easier to maintain once you've finished splitting them, yes. But splitting them is not always easy. Trying to split off a piece of a program into a reusable library reveals where the original developers failed to think about where the seams should be. If one part of the application is reaching deep into another part of the application, it can be difficult to fix. Ripping the seams forces you to define the internal APIs more clearly, and this is what ultimately makes the code base easier to maintain. Reusability and maintainability are both products of well defined seams.
great post. i think a classic/canonical example of what you talk about is a GUI application. many times a GUI application is one program and the backend/frontend are tightly-coupled. as time goes by issues arise... like someone else needs to use the backend but can't because it is tied to the frontend. or the backend processing takes too long and bogs down the frontend. often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.
– Trevor Boyd Smith
9 hours ago
add a comment |
Easier to maintain once you've finished splitting them, yes. But splitting them is not always easy. Trying to split off a piece of a program into a reusable library reveals where the original developers failed to think about where the seams should be. If one part of the application is reaching deep into another part of the application, it can be difficult to fix. Ripping the seams forces you to define the internal APIs more clearly, and this is what ultimately makes the code base easier to maintain. Reusability and maintainability are both products of well defined seams.
Easier to maintain once you've finished splitting them, yes. But splitting them is not always easy. Trying to split off a piece of a program into a reusable library reveals where the original developers failed to think about where the seams should be. If one part of the application is reaching deep into another part of the application, it can be difficult to fix. Ripping the seams forces you to define the internal APIs more clearly, and this is what ultimately makes the code base easier to maintain. Reusability and maintainability are both products of well defined seams.
answered 11 hours ago
TKKTKK
406110
406110
great post. i think a classic/canonical example of what you talk about is a GUI application. many times a GUI application is one program and the backend/frontend are tightly-coupled. as time goes by issues arise... like someone else needs to use the backend but can't because it is tied to the frontend. or the backend processing takes too long and bogs down the frontend. often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.
– Trevor Boyd Smith
9 hours ago
add a comment |
great post. i think a classic/canonical example of what you talk about is a GUI application. many times a GUI application is one program and the backend/frontend are tightly-coupled. as time goes by issues arise... like someone else needs to use the backend but can't because it is tied to the frontend. or the backend processing takes too long and bogs down the frontend. often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.
– Trevor Boyd Smith
9 hours ago
great post. i think a classic/canonical example of what you talk about is a GUI application. many times a GUI application is one program and the backend/frontend are tightly-coupled. as time goes by issues arise... like someone else needs to use the backend but can't because it is tied to the frontend. or the backend processing takes too long and bogs down the frontend. often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.
– Trevor Boyd Smith
9 hours ago
great post. i think a classic/canonical example of what you talk about is a GUI application. many times a GUI application is one program and the backend/frontend are tightly-coupled. as time goes by issues arise... like someone else needs to use the backend but can't because it is tied to the frontend. or the backend processing takes too long and bogs down the frontend. often the one big GUI application is split up into two programs: one is the frontend GUI and one is a backend.
– Trevor Boyd Smith
9 hours ago
add a comment |
I'll have to disagree with the majority on this one. Splitting up an application into two separate ones does not in itself make the code any easier to maintain or reason about.
Separating code into two executables just changes the physical structure of the code, but that's not what is important. What decides how complex an application is, is how tightly coupled the different parts that make it up are. This is not a physical property, but a logical one.
You can have a monolithic application that has a clear separation of different concerns and simple interfaces. You can have a microservice architecture that relies on implementation details of other microservices and is tightly coupled with all others.
What is true is that the process of how to split up one large application into smaller ones, is very helpful when trying to establish clear interfaces and requirements for each part. In DDD speak that would be coming up with your bounded contexts. But whether you then create lots of tiny applications or one large one that has the same logical structure is more of a technical decision.
answered 9 hours ago
Voo
But what if one takes a desktop application with multiple editing modes and instead makes a separate desktop application for each mode, which the user opens individually rather than having the modes interface with each other? Would that not eliminate a nontrivial amount of code dedicated to producing the "feature" of "user can switch between editing modes"?
– The Great Duck
49 mins ago
It's important to remember that correlation is not causation.
Building a large monolith and then splitting it up into several small parts may or may not lead to a good design. (It can improve the design, but it isn't guaranteed to.)
But a good design often leads to a system being built as several small parts rather than as one large monolith. (A monolith can be the best design; it is just much less likely to be.)
Why are small parts better? Because they're easier to reason about. And if it's easy to reason about correctness, you're more likely to get a correct result.
To quote C.A.R. Hoare:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
If that's the case, why would anyone build an unnecessarily complicated or monolithic solution? Hoare provides the answer in the very next sentence:
The first method is far more difficult.
And later in the same source (the 1980 Turing Award Lecture):
The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
answered 8 hours ago
Daniel Pryden
This is not a question with a yes-or-no answer. It is not just a question of ease of maintenance; it is also a question of the efficient use of skills.
Generally, a well-written monolithic application is efficient. Inter-process and inter-device communication is not cheap. Breaking up a single process decreases efficiency. However, executing everything on a single processor can overload the processor and slow performance. This is the basic scalability issue. When the network enters the picture, the problem gets more complicated.
A well-written monolithic application that can operate efficiently as a single process on a single server can be easy to maintain and keep free of defects, yet still not be an efficient use of coding and architectural skills. The first step is to break the process into libraries that still execute in the same process but are coded independently, following the disciplines of cohesion and loose coupling. A good job at this level improves maintainability and seldom affects performance.
The next stage is to divide the monolith into separate processes. This is harder because you enter into tricky territory. It's easy to introduce race condition errors. The communication overhead increases and you must be careful of "chatty interfaces." The rewards are great because you break a scalability barrier, but the potential for defects also increases. Multi-process applications are easier to maintain on the module level, but the overall system is more complicated and harder to troubleshoot. Fixes can be devilishly complicated.
When the processes are distributed to separate servers or to a cloud-style implementation, the problems get harder and the rewards greater. Scalability soars. (If you are considering a cloud implementation that does not yield scalability, think hard.) But the problems that enter at this stage can be incredibly difficult to identify and think through.
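As a rough illustration of the jump from the library stage to the separate-process stage, here is a small Python sketch (standard library only; InventoryService and the 5 ms round trip are made-up names and numbers, not measurements). The logical module is identical in both cases; only the cost and failure characteristics of calling it change, which is exactly where "chatty interfaces" start to hurt.

import time

class InventoryService:
    # The logical module: identical whether it runs in- or out-of-process.
    def __init__(self):
        self._stock = {"bolt": 40, "nut": 55}

    def quantity(self, item: str) -> int:
        return self._stock.get(item, 0)

class InProcess:
    # Library-style composition: a direct call, effectively free.
    def __init__(self, service: InventoryService):
        self._service = service

    def quantity(self, item: str) -> int:
        return self._service.quantity(item)

class OverTheWire:
    # The same interface once a process boundary is introduced. The latency
    # here is simulated with sleep(); in reality it would be IPC or HTTP,
    # and each call could also fail or time out independently.
    def __init__(self, service: InventoryService, round_trip_s: float = 0.005):
        self._service = service
        self._round_trip_s = round_trip_s

    def quantity(self, item: str) -> int:
        time.sleep(self._round_trip_s)  # serialize, transmit, wait
        return self._service.quantity(item)

def total_stock(client, items) -> int:
    # A "chatty" caller: one round trip per item. In-process this is
    # negligible; across a boundary it multiplies, which is why the split
    # usually forces coarser, batched interfaces.
    return sum(client.quantity(i) for i in items)

if __name__ == "__main__":
    service = InventoryService()
    items = ["bolt", "nut"] * 50
    for client in (InProcess(service), OverTheWire(service)):
        start = time.perf_counter()
        total = total_stock(client, items)
        print(type(client).__name__, total, f"{time.perf_counter() - start:.3f}s")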
answered 1 hour ago
MarvW
New contributor
If the animation and modelling capabilities were split into their own separate application and developed separately, with files being passed between them, would they not be easier to maintain?
Don't confuse easier to extend with easier to maintain: a module, per se, isn't free of complications or dubious designs. Maya can be hell on earth to maintain while its plugins are not, or vice versa.
– Laiv
13 hours ago
I'll add that a single monolithic program tends to be easier to sell, and easier for most people to use.
– DarthFennec
10 hours ago
@DarthFennec The best apps look like one app to the user but utilize whatever is necessary under the hood. How many microservices power the various websites you visit? Almost none of them are monoliths anymore!
– corsiKa
10 hours ago
@corsiKa There's usually nothing to gain by writing a desktop application as multiple programs that communicate under the hood, that isn't gained by just writing multiple modules/libraries and linking them together into a monolithic binary. Microservices serve a different purpose entirely, as they allow a single application to run across multiple physical servers, allowing performance to scale with load.
– DarthFennec
10 hours ago
@corsiKa - I would guess that the overwhelming majority of websites I use are still monoliths. Most of the internet, after all, runs on WordPress.
– Davor Ždralo
7 hours ago